What you are too afraid to ask about Artificial Intelligence Part II
This article follows the first piece on machine learning, describing how AI interacts with neuroscience, and how hardware and chips are being created and modified to run specific AI algorithms more efficiently.
I highly recommend reading the work of Numenta, which offers a fresh perspective on neuroscience and the biological approach to AI.
Congratulations to Francesco Corea, the winner of the March Data Science Foundation Contributor Competition
The March Contributors Competition is now closed; the winner of the £50 Amazon gift voucher is Francesco Corea. Thank you all for submitting articles and papers. The April competition is now open.
Thanks Chris for the shout-out
Advances in Data Science 2018: Final Speakers & Discussion Themes
The University of Manchester's Data Science Institute is delighted to announce the final speaker line-up and the themes to be discussed at our upcoming Advances in Data Science Conference on May 21st and 22nd in Manchester. “Focusing on Gaussian processes, deep learning, latent variable models, subspace learning, network models, spatio-temporal models and longitudinal data, we will explore the ways in which these methodologies can be used to address challenges faced by those working in the key application areas: Health - Security - Criminology - Discrimination/Bias - Politics - Demographics - Urban Planning - Global Challenges - Social Media - Conservation” The full conference schedule will be available on the Advances in Data Science website on March 30th 2018.
The speaker line-up for the 2018 Advances in Data Science Conference in Manchester has been announced and bookings are being taken.
Human Resources for Big Data Professions: A Systematic Classification of Job Roles and Required Skill Sets
An excellent paper, well written and well presented. Looking forward to seeing more like this from Andrea De Mauro et al.
Finding my dream "connected" home
So I'm thinking of moving house. Yes, I know that doesn't sound like a matter for DataTalk. But hear me out. What I'm wondering is: do I need to think about whether the place I move into is future-proofed in terms of the way technology and data are being integrated into the fabric of our homes? And – by extension – is this something that estate agents, the construction, building, and refurbishment industries, and even local government need to look at more closely?

I've lived in early 20th century buildings where there weren't enough plug sockets to connect a television, VCR and so on without recourse to a potentially inflammatory daisy chain of adapters. I've lived in flats where the phone line wouldn't allow bandwidth for broadband, and the cable company wouldn't even consider laying down optical fibre. I've tried to "work from home" in digs where there wasn't even GPRS, let alone a reliable 3G mobile signal. And I'm not talking about rural England here. I'm talking city life.

So, when I finally move into my "forever home", I want to make sure I'm not going to spend the rest of my life bewailing the fact that – for me – the Internet of Things is more like the Internet of Nothing-But-Trouble. I've divided my "need to have" and "nice to have" lists into two categories: fabric and location.

In terms of fabric, I want to make sure that my new place has plenty of hidden space for cabling. Yes, I know wifi is king right now, but with concerns about data security and hacking becoming more and more pertinent, I want to know that if the future means hardwired connections then I won't have to spend my life tripping over – or wallpapering over – cables. And I want the rooms to be a shape which the hidden cameras and sensors that make my sentient home work can cope with… no blind spots please, no unreachable spaces. And, on that topic, I want my robots to be able to move around safely. Do I need durable, even floors and a minimum of stairs?
Do I need wide doorways, sliding doors? And if wifi does remain a big thing then I'll want to make sure I don't have to share my network bandwidth with the street, so I'll want outer walls with data insulation. But I don't want to block my 3G signal…

In terms of location, I want to be able to get all that optical fibre, 5G cellular coverage and so on. And I want my home internet of things to be able to work with the outside internet of things while I'm not there. So I need to learn more about the layout of the surrounding area too. Do I need to live near a telephone exchange? Do I need to think about where self-driving cars will be able to operate? Drone landing areas?

Perhaps most of all, I need to think about what to do if the Internet of Things "goes down". Do I need the back-up infrastructure of the past, such as old-school timers and switches for my heating, or will I opt for clean walls and rely on my mobile apps? Is the future one where the future is overlaid onto the past, or one where the past is stripped away to make room for the future?

I wonder if today's housebuilders and town planners are asking the same questions of themselves as they get ready to greenlight and build my dream home. I hope so, because otherwise I foresee a whole lot of unhappy families spending a fortune on making their homes fit for purpose – just as I recall friends shelling out a fortune to rewire their homes when the electricity man realised their circuits were outdated and unsafe, or colleagues returning their fancy new smartphones when they discovered they weren't very smart once taken home to a phone-mast desert.

My gut feeling is that – metaphorically – I want to keep my old chimney and fireplace, even if I've decided to have modern electric heaters fitted. Then I know I can always light a fire in the grate if the clock on the central heating packs up. I think a dual-layer system is the way to go.
But I’m not an expert and I haven’t got a research grant to investigate. Maybe I need to look for more data...
An interesting point. I suggest this is an issue to be considered in the next draft of the building regulations.
What are the most leftfield examples of data mining for me to use in a presentation?
I recently started working in the data science sector and friends regularly ask me "what's it all about?". My son even asks me what it's like to work down a data mine. After giving the usual explanations about using vast data sets and machine learning to improve outcomes for businesses, research, and medicine, I'm met with looks of veiled pity that say "It must be pretty boring." But, as you and I know, it isn't. It's the future. So what are the real-world examples I should be giving to really make my party chatter fizz? Is it the potential to become a poker millionaire? The opportunity to find out what makes the ideal husband? Or the ideal mix of gin and tonic? Please, lend me your wisdom... before my data-ignorant friends fall asleep on me!
Students launch Machine Learning Society at Imperial
Undergraduates Harry Berg (Mechanical Engineering) and Haron Shams (Design Engineering) have set up the Imperial College Machine Learning Society to get students involved in and inspired by technology that's going to change the world. It is interesting that this society was planned by two undergraduate students and that the first event attracted 250 attendees, with over half of them being PhD students.
Your First Machine Learning Project in Python Step-By-Step
Do you want to do machine learning using Python, but you're having trouble getting started? In this post, you will complete your first machine learning project using Python. In this step-by-step tutorial you will:

- Download and install Python SciPy and get the most useful package for machine learning in Python.
- Load a dataset and understand its structure using statistical summaries and data visualization.
- Create 6 machine learning models, pick the best, and build confidence that its accuracy is reliable.

If you are a machine learning beginner looking to finally get started with Python, this tutorial was designed for you.
https://machinelearningmastery.com/machine-learning-in-python-step-by-step/
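The steps above can be sketched in a few lines. This is a minimal illustration, not the tutorial's own code: it assumes scikit-learn is installed and uses the built-in iris dataset and three candidate models (rather than the tutorial's six) to show the load-compare-pick workflow.

```python
# Minimal first-ML-project sketch: load a dataset, compare several
# models by cross-validation, then check the best one on held-out data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Load the data and hold out 20% for a final validation check
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Score each candidate model with 5-fold cross-validation on the
# training portion only
models = {
    "logreg": LogisticRegression(max_iter=200),
    "tree": DecisionTreeClassifier(random_state=42),
    "knn": KNeighborsClassifier(),
}
scores = {name: cross_val_score(model, X_train, y_train, cv=5).mean()
          for name, model in models.items()}

# Refit the best model on all training data and report its accuracy
# on the unseen test set
best_name = max(scores, key=scores.get)
best_model = models[best_name].fit(X_train, y_train)
test_acc = best_model.score(X_test, y_test)
print(best_name, round(test_acc, 3))
```

The held-out test score is the "confidence that the accuracy is reliable" step: cross-validation picks the model, and the untouched test set confirms the estimate was not an artifact of the training data.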
Good tutorial Saroj, thanks for sharing it
Artificial Intelligence Explained
Basic definitions and categorization

I. Overview

Artificial Intelligence (AI) represents nowadays a paradigm shift that is driving both scientific progress and industry evolution. Given the intense level of domain knowledge required to really appreciate the technicalities of artificial engines, what AI is and can do is often misunderstood: the general audience is fascinated by its development and frightened by Terminator-like scenarios; investors are mobilizing huge amounts of capital but do not have a clear picture of the competitive drivers that characterize companies and products; and managers are rushing to get their hands on the latest software that may improve their productivity and revenues, and eventually their bonuses. Even though the general optimism around advancements in artificial intelligence is evident (Müller and Bostrom, 2016), in order to foster the pace of growth facilitated by AI I believe it is necessary to clarify some concepts.

II. Basic Definitions and Categorization

First, let's describe what artificial intelligence means. According to Bostrom (2014), AI today is perceived in three different ways: it is something that might answer all your questions, with an increasing degree of accuracy ("the Oracle"); it could do anything it is commanded to do ("the Genie"); or it might act autonomously to pursue a certain long-term goal ("the Sovereign"). However, AI should not be defined by what it can or cannot do, and thus a broader definition is appropriate. An artificial intelligence is a system that can learn how to learn, or in other words a series of instructions (an algorithm) that allows computers to write their own algorithms without being explicitly programmed to do so. Although we usually think of intelligence as the computational part of our ability to achieve certain goals, it is rather the capacity to learn and solve new problems in a changing environment.
In a primordial world, then, intelligence is simply the attitude that fosters survival and reproduction (Lo, 2012; 2013; Brennan and Lo, 2011; 2012). A living being is then defined as intelligent if it is driving the world into states it is optimizing for. No matter how accurately we define this concept, we can intuitively understand that the level of intelligence machines are provided with today is far from the average level of any human being. While human actions proceed from observing the physical world and deriving the underlying relationships that link cause and effect in natural phenomena, an artificial intelligence is driven entirely by data and has no prior knowledge of the nature of the relationships among those data. It is then "artificial" in this sense because it does not stem from physical laws but rather from pure data.

We have now defined what artificial intelligence is and what it means to us. In addition, though, there are two other concepts that should be treated as part of this introduction to AI: first, how AI differs from and/or relates to other buzzwords (big data, machine learning, etc.); second, what features a system has to possess to be defined as intelligent. We think of AI as an interdisciplinary field, which covers (and requires) the study of manifold sub-disciplines, such as natural language processing, computer vision, the Internet of Things, and robotics. Hence, in this respect, AI is an umbrella term that gathers a bucket of different aspects. We can look at an AI as being similar to a fully functional living being, and we can draw comparisons to figure out the degree of relationship between AI and other (sub)fields. If AI and the human body are alike, it has to possess a brain, which carries out a variety of tasks and is in charge of specific functions such as language (NLP), sight (computer vision), and so on and so forth.
The body is made of bones and muscles, much as a robot is made of circuits and metal. Machine learning can be seen as the specific movements, actions, or thoughts we develop and fine-tune by doing. The Internet of Things (IoT) corresponds to the human senses, the way in which we perceive the world around us. Finally, big data is the equivalent of the food we eat and the air we breathe, i.e., the fuel that makes us tick, as well as every input we receive from the external world that is captured by our senses. It is a rather rough comparison, but it conveys a simple sense of how all these terms relate to each other.

Although many other comparisons could be made, and many of them may be correct simultaneously, the choice of what features a system should have to be a proper AI is still quite controversial. In my opinion, the system should be endowed with a learning structure, an interactive communication interface, and a sensorial-like input digestion. Unfortunately, this idea is not rigorous from a scientific point of view, because it would involve a series of ethical, psychological, and philosophical considerations that would have to be taken into account.

III. Three Types of AI

Instead of dwelling on this non-provable concept, I prefer to illustrate how those characteristics are reflected in the different types of AI we are (and will be) dealing with. An AI can indeed be classified in three ways. The first is narrow AI, which is nothing more than a specific domain application or task that gets better by ingesting further data and "learns" how to reduce its output error. An example here is DeepBlue for chess, but more generally this group includes all the functional technologies that serve a specific purpose. These systems are usually quite controllable because they are limited to specific tasks.
When a program is instead not programmed for completing a specific task, but can learn from one application and apply the same bucket of knowledge to different environments, we face an Artificial General Intelligence (AGI). This is not technology-as-a-service as in the narrow case, but rather technology-as-a-product. The best example of this subgroup is Google DeepMind, although it is not a real AGI in all respects. We are indeed not there yet, because even DeepMind cannot perform an intellectual task as a human would. To get there, much more progress on understanding how the brain works, on optimizing brain-like processes, and on portable computing power has to be made. Someone might think that an AGI could easily be achieved by piling up many narrow AIs, but in fact this is not true: it is not a matter of the number of specific skills a program can carry out, but rather of the integration between all those abilities. This type of intelligence does not require an expert to make it work or to tune it, as would be the case for a narrow AI, but it has a huge limitation: at the current state of the art, it can be reached only by continuously streaming an infinite flow of data into the engine.

The final stage is instead called Artificial Superintelligence (ASI): this intelligence largely exceeds the human one; it is capable of scientific and creative thinking; it is characterized by general common wisdom; and it has social skills and maybe an emotional intelligence. Although we often assume this intelligence to be a single supercomputer, it is more likely to be made up of a network or a swarm of several intelligences. The way in which we will reach the different stages is, though, still controversial, and many schools of thought exist. The symbolic approach claims that all knowledge is symbolic and the representation space is limited, so everything should be stated in a formal mathematical language.
This approach has historically tackled the complexity of the real world, and it has suffered at the same time from computational problems as well as from difficulty in explaining where the knowledge itself originates. Statistical AI instead focuses on managing uncertainty in the real world (Domingos et al., 2006), and it lives in the inference realm, in contrast to the more deductive logical AI. As an aside, it is not yet clear to what degree the human brain should be taken as an example: biological neural networks seem to provide a great infrastructure for developing an AI, especially regarding the use of sparse distributed representations (SDRs) to process information.

How Does AI Compare to Humans?

So the natural question everyone is asking is: "Where do machines stand with respect to humans?" Well, the reality is that we are still far from the point at which a superintelligence will exceed human intelligence, the so-called Singularity (Vinge, 1993). The famous futurist Raymond Kurzweil proposed in 1999 the idea of the law of accelerating returns, which envisages an exponential rate of technological change due to the falling cost of chips and their increasing computational capacity. In his view, human progress is S-shaped, with inflection points corresponding to the most relevant technological advancements, and thus proceeds by jumps instead of being smooth and uniform. Kurzweil also borrowed Moore's law to estimate the precise year of the singularity: our brain is capable of 10¹⁶ calculations per second (cps) and holds 10¹³ bits of memory, and assuming Moore's law holds, Kurzweil computed that we will reach an AGI with those capabilities in 2030, and the singularity in 2045. I believe, though, that this is a rather optimistic view, because the intelligence machines are provided with nowadays is still only partial.
They do not possess any common sense; they do not have any sense of what an object is; they do not have any memory of earlier failed attempts; and they are not conscious (the so-called "Chinese room" argument: even if a machine can perfectly translate Chinese to English and vice versa, it does not really understand the content of the conversation). On the other side, they solve problems through structured thinking, and they have more storage, more reliable memory, and raw computational power. Humans, instead, try to be more efficient and select ex ante the data that could be relevant (at the risk of losing some important information); they are creative and innovative; they extrapolate essential information better and faster from only a few instances; and they can transfer and apply that knowledge to unknown cases.

References

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. OUP Oxford.
Brennan, T. J., Lo, A. W. (2011). "The Origin of Behavior". Quarterly Journal of Finance, 7: 1043–1050.
Brennan, T. J., Lo, A. W. (2012). "An Evolutionary Model of Bounded Rationality and Intelligence". PLoS ONE, 7(11): e50310.
Domingos, P., Kok, S., Poon, H., Richardson, M., Singla, P. (2006). "Unifying logical and statistical AI". Proceedings of the 21st National Conference on Artificial Intelligence, 1: 2–7.
Lo, A. W. (2012). "Adaptive Markets and the New World Order". Financial Analysts Journal, 68(2): 18–29.
Lo, A. W. (2013). "The Origin of Bounded Rationality and Intelligence". Proceedings of the American Philosophical Society, 157(3): 269–280.
Müller, V. C., Bostrom, N. (2016). "Future Progress in Artificial Intelligence: A Survey of Expert Opinion", in Vincent C. Müller (ed.): Fundamental Issues of Artificial Intelligence, Springer: 553–571.
Vinge, V. (1993). "The Coming Technological Singularity: How to Survive in the Post-Human Era". In NASA Lewis Research Center, Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace: 11–22.
This article is an excerpt of my book “Artificial intelligence and exponential technologies: business models evolution and new investment opportunities”, edited by Springer.
Top 4 Data Science Trends to Watch in 2018
As 2017 comes to a close, it's time to look forward at what's ahead in the wide world of data science. While last year was the year that the idea of deep learning really began to make its way into the mainstream, the coming year will be about how to make deep learning better, faster, and stronger (but not "harder" - in fact, the goal is quite the opposite. Sorry, Daft Punk).
https://blog.dataiku.com/top-4-data-science-trends-to-watch-in-2018