AI startups to watch: Meet the hottest machine learning startups in the UK
The year 2016 saw a host of tech giants acquire AI startups, and 2017 has followed a similar pattern. Apple, Intel, Twitter and Microsoft have all spent large sums to bring artificial intelligence startups, and their expertise, in-house. Four of the biggest AI startup acquisitions of the last five years have come from the UK, starting with Google's purchase of DeepMind in 2014 for a reported £400 million. Since then Apple has purchased Cambridge-based natural language processing specialists VocalIQ, Microsoft has bought the machine learning powered keyboard SwiftKey, and Twitter has acquired Entrepreneur First alumni Magic Pony. https://www.techworld.com/picture-gallery/startups/uk-ai-startups-watch-hottest-machine-learning-startups-in-uk-3645606/
Narayana Murthy dismisses artificial intelligence as ‘more hype than reality’
Corporate thought-leader and Infosys co-founder NR Narayana Murthy has criticised the high wage hikes that senior managements have been awarding themselves while the software industry is in trying times, and has advised them to make “sacrifices” to maintain the common man’s faith in capitalism. http://m.thehindubusinessline.com/info-tech/narayana-murthy-dismisses-artificial-intelligence-as-more-hype-than-reality/article10001743.ece
Getting a new periodic table of elements using AI
"Elementary particles are the building blocks of al matter everywhere in the universe. Their properties are connected with the fundamental forces of nature" Murray Gell Mann Getting a new periodic table of elements using AI Abstract Objective: To obtain an atomic classification based on clustering techniques using non-supervised learning algorithms. Design: The sample of atoms used in the experiments is defined using a set of atomic elements with known properties that are not null for all the individuals of the sample. Different clustering algorithms are used to establish relationships between the elements, getting as result a cluster of atoms related with each other by the numerical values of some of their structural properties. Results: Sets of elements related with the atom that represents each cluster. Keywords: Clustering, atoms, periodic table of elements, unsupervised algorithms, Random Forest, K-Means, K-Nearest Neighbour, Weka, Bayesian Classifier. Introduction The periodic table of elements is an atomic organisation based on two axis. The horizontal axis establishes an increasing order based on the atomic number (number of protons) of each element. The vertical arrangement is managed by the electronic configuration and presents a taxonomic structure designed by the electrons of their latest layer . Furthermore, four main blocks arrange the atoms by similar properties (gases, metals, nonmetals, metalloids). Additionally to the number of protons and the electronic configuration, the atoms are characterised by other attributes that are not ascendant nor cyclic in the periodic table of elements. The values of these properties constitute a sample of numbers that represent different atomic magnitudes that distinguish in some how the chemical elements. In this experiment some of these chemical and physical dimensions have been involved in the training of a set of machine learning algorithms to obtain representative clusters of each element. 
Research problem
The hypothesis of this experiment considers the use of some variants of unsupervised learning models to discover relationships between atomic elements based on a few chemical and physical matter attributes. These techniques compute clusters of categories from the numerical attributes. The research problem also leads to an element clustering that could offer a new atomic distribution based on the functions inferred by the machine learning processes. The goal is to present an organisation of elements based on the clustering calculated over a specific set of atomic properties.

Units of analysis
The following atomic properties have been used to train and evaluate the unsupervised algorithms: melting point [K], boiling point [K], atomic radius [pm], covalent radius [pm], molar volume [cm³/mol], specific heat [J/(kg·K)], thermal conductivity [W/(m·K)], Pauling electronegativity [Pauling scale], first ionisation energy [kJ/mol] and lattice constant [pm]. Only atoms with non-null values for every magnitude have been selected for the sample; note that some of these properties have not yet been measured or calculated for the atoms that do not appear in it. The raw data can be downloaded from this link. The following graphical representation shows how some of these properties are distributed across the spectrum of elements sorted by ascending number of protons:

Graphic 1. Distribution of the melting point, boiling point, lattice constant and the atomic radius versus the atomic number.

At first glance there is no apparent correlation, nor any pattern, between the displayed magnitude values and the elements sorted by atomic number.

Methods
Unsupervised machine learning algorithms infer models that identify hidden structure in "untagged" data.
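The experiments themselves use Java-ML, but the preprocessing idea can be illustrated in Python. The sketch below builds a small feature matrix of element properties, standardises it (so magnitudes on very different scales become comparable), and computes the Pearson correlation of each property with the atomic number, mirroring the "no seeming correlation" observation above. The property values here are rough illustrative numbers for a handful of elements, not the actual sample, which should be taken from the link in the text.

```python
import numpy as np

# Illustrative property values for a few elements (not the real sample).
# Columns: melting point [K], boiling point [K], atomic radius [pm]
elements = ["Li", "C", "N", "O", "Na", "S"]
atomic_number = np.array([3, 6, 7, 8, 11, 16])
features = np.array([
    [453.7, 1603.0, 145.0],
    [3823.0, 4300.0, 70.0],
    [63.2, 77.4, 65.0],
    [54.4, 90.2, 60.0],
    [371.0, 1156.0, 180.0],
    [388.4, 717.8, 100.0],
])

# Standardise each property column: zero mean, unit variance
standardized = (features - features.mean(axis=0)) / features.std(axis=0)

# Pearson correlation of each property with the atomic number
for name, column in zip(["melting point", "boiling point", "atomic radius"],
                        standardized.T):
    r = np.corrcoef(atomic_number, column)[0, 1]
    print(f"{name}: r = {r:+.2f}")
```

Standardisation matters for the distance-based methods used later (K-Means, K-NN), since otherwise boiling points in the thousands of kelvin would dominate radii in the low hundreds of picometres.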
Thus no categories are included in the observations, and the data used for learning cannot be used to evaluate the accuracy of the results. Using the machine learning library Java-ML and the non-null values of the magnitudes specified above, two exercises were performed:

1 - Clustering of elements
The scope of this exercise is to create clusters of atomic elements using three different machine learning techniques provided by the Java-ML library. The result was three atomic configurations based on the following algorithms:
K-Means clustering with 10 clusters. This algorithm divides the selected atomic elements into k clusters, where each individual is assigned to the cluster with the nearest mean.
Iterative Multi K-Means, an extension of K-Means. This algorithm performs iterations with different values of k, starting from kMin and increasing to kMax, with several iterations for each k. Each clustering result is scored with an evaluation function, and the clustering with the best score is returned. The evaluation applied in this exercise was the sum of squared errors.
K-Means wrapped into Weka algorithms. Classification algorithms from Weka are accessible from within the Java-ML library. An experiment with 3 clusters was run to compare against the first exercise (K-Means with 10 clusters).
The results were presented using the TreeMap provided by the d3 TreeMap graphic library.

Graphic 2. Applying K-Means clustering to the sample.

2 - Atomic element classification and the relationships between elements
The following exercise was intended to evaluate the degree of relationship among the atoms contained in the sample. Three algorithms were applied:
Random Forest with 30 trees and 10 variables randomly sampled as candidates at each split (one for each atomic magnitude). This technique works by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes produced by the individual trees.
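The clustering exercise was run with Java-ML; the same two steps can be sketched in Python with scikit-learn on synthetic data standing in for the standardised property matrix. One caveat on the sweep: Java-ML's Iterative Multi K-Means scores by sum of squared errors, but raw SSE always decreases as k grows, so this sketch scores candidates by silhouette instead to make the k sweep meaningful.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Synthetic stand-in for the standardised property matrix
# (rows = elements, columns = the ten atomic magnitudes listed above)
X = rng.normal(size=(60, 10))

# Plain K-Means, as in the first exercise (the write-up uses 10 clusters;
# 3 keeps this toy example readable)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# "Iterative Multi K-Means": sweep k from kMin to kMax and keep the
# best-scoring clustering
k_min, k_max = 2, 6
scores = {}
for k in range(k_min, k_max + 1):
    candidate = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, candidate)
best_k = max(scores, key=scores.get)
print("best k by silhouette:", best_k)
```

With the real sample, `labels` would give the group of each element, which is what the d3 TreeMaps visualise.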
Bayesian classifier. The Naive Bayes classification algorithm has been used to classify the set of elements into different categories.
K-nearest neighbour (KNN) classification with KD-tree support. The number of neighbours was fixed to 8, considering that this number of potential elements could establish the boundaries for an element positioned at the centre of a square (edge and corner positions are not handled in the current hypothesis).

Graphic 3. Schema of 8 neighbours surrounding the target element.

Each algorithm acted as a classifier and produced a membership distribution with an associated degree evaluation. Classes with a membership evaluation equal to zero were discarded. In this experiment, the physical and chemical attribute values were clustered first, and afterwards each atom in the sample was classified into the set of calculated clusters. Therefore, each element is identified with a specific group, the only requirement being that the atom being classified must be the representative of the selected category. The calculated clusters have been distributed into pairs of atoms with their corresponding degree evaluation, following this structure: [Xi, Yj, Ej], where Xi is each atom in the sample, Yj is each element in category Y, and Ej is the degree evaluation related to the pair. The relationships between the individuals and their categories are shown through the chord representation based on the Chord Viz component provided by d3.

Graphic 4. Nitrogen relationships considering the evaluation of different classifiers.

Results
The three tree maps (one per clustering algorithm) in which the chemical elements have been organised show interesting groups of components. For instance, all of them place S and Se in the same group.
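The cluster-then-classify pipeline that produces the [Xi, Yj, Ej] triples can also be sketched with scikit-learn, again on synthetic stand-in data and with placeholder element names. The K-NN classifier's membership distribution (here `predict_proba`, a fraction of the 8 neighbours per cluster) plays the role of the degree evaluation Ej, and zero-membership classes are dropped exactly as in the text; the Naive Bayes and Random Forest variants would only swap the classifier class.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Synthetic stand-in for the standardised property matrix
X = rng.normal(size=(40, 10))
names = [f"elem{i}" for i in range(40)]  # placeholder element symbols

# Step 1: cluster the attribute values
cluster_labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X)

# Step 2: classify each atom into the calculated clusters with K-NN
# (8 neighbours, matching the boundary argument in the text)
knn = KNeighborsClassifier(n_neighbors=8).fit(X, cluster_labels)
probs = knn.predict_proba(X)

# Step 3: emit [Xi, Yj, Ej] triples, discarding zero membership evaluations
triples = [
    (names[i], int(cluster), float(p))
    for i, row in enumerate(probs)
    for cluster, p in zip(knn.classes_, row)
    if p > 0
]
print(triples[:5])
```

Note that relationships derived this way need not be symmetric, which matches the non-commutative behaviour reported below for nitrogen and hydrogen.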
Other atoms (all of them gases) such as Ne, Ar, Kr and Xe are also enclosed in the same group by all the algorithms (remember that neither the atomic number nor the electronic configuration was included in the models). It is interesting to mention that the configurations generated by the two K-Means algorithms place H and Li in separate, single-element clusters. Regarding the weighted relationships between the elements, a chord graphic has been created for each machine learning algorithm. This data representation shows how the atomic elements can be related to each other through unsupervised machine learning techniques, taking some of their chemical and physical properties and assigning a relational degree to them. There are some interesting behaviours, such as the set of relationships found for nitrogen: the Random Forest algorithm determined that O, Ne and Ar are highly related; the Bayesian classifier found that only oxygen was related; and the K-NN method, with the number of neighbours fixed to 8, found that O, Ne, Cl, Ar, Br, Kr and I are related. Some familiar associations can be found in the calculated relationships when comparing the components of the clusters with their distribution in the periodic table of elements. Nevertheless, other non-evident atomic relations have been established by these methods. Additionally, the relationships are not symmetric, which is a remarkable characteristic: for instance, in the results calculated with the Random Forest algorithm, nitrogen is not related to hydrogen in the reverse direction.

Conclusions
Although the atomic organisation calculated by the machine learning algorithms does not follow any physical or chemical rule, some associations arise, creating groups of components with configurations similar to those provided by the periodic table of elements.
Beyond the calculated results, the applied library (Java-ML) and the algorithms used, the exercise is interesting in itself. The proof that chemical or physical relationships can be established among the elementary components, based on the similarity of their properties, using machine learning can lead to new lines of research.

Acknowledgments
I want to thank Montse Torra for gathering the physical and chemical properties of each atom used in the sample.

References
Bostjan Kaluza. "Machine Learning in Java". Packt Publishing Ltd, April 2016.
Eibe Frank, Mark A. Hall, and Ian H. Witten. "The WEKA Workbench". Online appendix for "Data Mining: Practical Machine Learning Tools and Techniques". Morgan Kaufmann, fourth edition, 2016.
Physical and chemical atomic properties extracted from WebElements and PeriodicTable.
Very original application, Toni - thanks for sharing
What Is The Difference Between Artificial Intelligence And Machine Learning?
Artificial Intelligence (AI) and Machine Learning (ML) are two very hot buzzwords right now, and often seem to be used interchangeably. They are not quite the same thing, but the perception that they are can sometimes lead to some confusion. So I thought it would be worth writing a piece to explain the difference. https://www.google.co.uk/amp/s/www.forbes.com/sites/bernardmarr/2016/12/06/what-is-the-difference-between-artificial-intelligence-and-machine-learning/amp/
The new CxO gang: data, AI, and robotics
Hiring new figures to lead the data revolution

It has been said that this new wave of exponential technologies will threaten a lot of jobs, both blue- and white-collar ones. But if on the one hand many roles will disappear, on the other hand in the very short term we are observing new people coming out of the crowd to lead this revolution and set the pace. These are the people who really understand the technicalities of the problems, have a clear view of the business implications of the new technologies, and can readily plan how to embed those new capabilities in enterprise contexts. Hence, I am going to briefly present three of them, i.e., the Chief Data Officer (CDO), the Chief Artificial Intelligence Officer (CAIO) and the Chief Robotics Officer (CRO). Sadly, I have never heard of a ‘Chief of Data Science’; for some strange reason, the role is usually called either ‘Head of Data Science’ or ‘Chief Analytics Officer’ (as if data scientists didn’t deserve someone at C-level to lead their efforts). Let’s see then who they are and what they would be useful for.

The Chief Data Officer (CDO)

A slide taken from one of the speakers at the CDO Summit in London illustrating business drivers and capabilities and how they relate to the CDO job.

Apparently, it is a new role born in a lighter form straight after the financial crisis, springing from the need for a central figure to deal with technology, regulation and reporting. Therefore, the CDO is basically the person who acts as a liaison between the CTO (tech guy) and the CAO/Head of Data Science (data guy) and takes care of data quality and data management. His final goal is to guarantee that everyone can get access to the right data in virtually no time. In that sense, a CDO is the person in charge of ‘democratizing data’ within the company.
It is not a static role: it has evolved from simply being a facilitator to being a data governor, with the tasks of defining data management policies and business priorities, shaping not only the data strategy but also the frameworks, procedures, and tools. In other words, he is a kind of ‘Chief of Data Engineers’ (if we agree on the distinction between data scientists, who actually deal with modeling, and data engineers, who deal with data preparation and data flow).

“The difference between a CIO and CDO (apart from the words data and information…) is best described using the bucket and water analogy. The CIO is responsible for the bucket, ensuring that it is complete without any holes in it, that the bucket is the right size with just a little bit of spare room but not too much, and that it is all in a safe place. The CDO is responsible for the liquid you put in the bucket, ensuring that it is the right liquid, in the right amount, and that it is not contaminated. The CDO is also responsible for what happens to the liquid, and for making the clean, vital liquid available for the business to slake its thirst.” (Caroline Carruthers, Chief Data Officer, Network Rail, and Peter Jackson, Head of Data, Southern Water)

Interestingly enough, the role of the CDO as we described it is both vertical and horizontal. It spans the entire organization, even though the CDO still needs to report to someone else in the organizational chart; who the CDO reports to will be largely determined by the organization he is operating in. Furthermore, it is also relevant to highlight that a CDO is more likely to be found in larger organizations than in small startups. The latter are usually set up to be data-driven from the start (with a forward-looking approach), and therefore the CDO function is already embedded in the role that designs the technological infrastructure and data pipeline. It is also true that not every company has a CDO, so how do you decide whether to get one?
Well, simply out of internal necessity, strict incoming regulation, or because all your business intelligence projects are failing because of data issues. If you have any of these problems, you might need someone who pushes the “fail-fast” principle as the data approach to be adopted throughout the entire organization, who considers data a company asset, and who wants to set the fundamentals that allow fast trial-and-error experimentation. And above all, someone who is centrally liable and accountable for anything about data. A CDO is then responsible for the end-to-end data workflow and oversees the entire data value chain. Finally, if the CDO does his job properly, you’ll see two different outcomes: first of all, the board will stop asking for quality data and will have a clear picture of what every team is doing. Second, and most important, a good CDO aims to create an organization where a CDO has no reason to exist. It is counterintuitive, but basically, a CDO will have done a great job when the company no longer needs a CDO, because every line of business will be responsible and liable for its own data. In order to reach this final goal, he needs to prove from the beginning that not investing in higher data quality and frictionless data transfer is a source of inefficiency in business operations, resulting in non-optimized IT operations and making compliance as well as analytics much less effective.

The Chief Artificial Intelligence Officer (CAIO)

If the CDO is by now a somewhat consolidated role, the CAIO is nothing more than an industry hypothesis (I am not sure I have seen one yet, despite the strong ongoing discussions between AI experts and sector players — see here and here for two opposite views on the topic).
Moreover, the creation of this new role highlights the emergence of two different schools of thought on enterprise AI, i.e., centralized vs decentralized AI implementation, and a clear cost-benefit analysis to understand which approach works better is still missing. My two cents: elevating AI to board-level representation means really becoming an AI-driven company and embedding AI into every product and process within your organization, and I bet not everyone is ready for that. So, let’s try to sketch at a glance the most common themes to consider when talking about a CAIO:

Responsibilities (what he does): a CAIO is someone who should be able to connect the dots and apply AI across data and functional silos (this is Andrew Ng’s view, by the way). If you also want a deeper look at what a CAIO job description would look like, check out the article by Tarun Gangwani;

Relevance (should you hire a CAIO?): you only need to do it if you understand that AI is no longer a competitive advantage for your business but rather part of your core product and business processes;

Skills (how do you pick the right person?): first and most important, a CAIO has to be a ‘guiding light’ within the AI community, because he will be one of your decisive assets in winning the AI talent war. This means he needs to be highly respected and trusted, which comes only with a strong understanding of foundational technologies and data infrastructure.
Finally, being a cross-functional role, he needs the right balance between the willingness to take risks and experiment to foster innovation, and attention to product and company needs (he needs to support different lines of business);

Risks (is hiring a CAIO a smart move?): there are two main risks, which are i) a misalignment between technology and business focus (you tend to put more attention on technology than on business needs), and ii) every problem being tackled with AI tools, which might not be that efficient (these people are highly trained and will be highly paid, so it is natural that they will try to apply AI to everything).

Where do I stand on that? Well, my view is that a CAIO makes sense, even though only temporarily. It is an essential position to allow a smooth transition for companies that strive to become AI-driven firms, but I don’t see the role as any different from what a smart tech CEO of the future should do (supported, of course, by the right lower management team). However, for the next decade, having a centralized function with the task of using AI to support the business lines (50% of the time) and foster innovation internally (50% of the time) sounds extremely appealing to me. In spite of all the predictions I can make, the reality is that the relevance of a CAIO will be determined by how we end up approaching AI, i.e., whether it will eventually be considered a mere instrument (AI-as-a-tool) or rather a proper business unit (AI-as-a-function).

The Chief Robotics Officer (CRO)

We moved from the CDO role, which has been around for a few years now, to the CAIO one, which is close to being embedded in organizational charts.
But the Chief Robotics Officer is a completely different story. Even if some are speaking about its importance (check out this report if you like), it is really not clear what his tasks would be and what kind of benefits he would bring to a company, and envisaging this role requires a huge leap of imagination and optimism about the future of work (and business). In a few words, what a CRO is supposed to take care of is managing the automated workforce of the company. To use Gartner’s words, ‘he will oversee the blending of human and robotic workers’. He will be responsible for the overall automation of workflows and for integrating them smoothly into the normal design process and daily activities. I am not sure I get the importance of this holistic approach to enterprise automation, although I recognize the relevance of having a central figure who will actively keep track of, and communicate to employees, all the changes made in transforming a manual activity or process into an automated one. Another interesting point is who the CRO will report to, which is of course shaped by his real functions and goals. If robotics is deeply rooted in the company and allows it to create or access new markets, a CRO might report directly to the CEO. If his goal is instead to automate internal processes to achieve higher efficiency, he will likely report to the COO or to a strategic CxO (varying by industry and vertical). My hypothesis is that this is going to be a strategic role (and not a technical one, as you might infer from the name) which, like the CAIO, might have a positive impact in the short term (especially in managing the costs of adopting early robotics technologies) but no reason to exist in the longer term.
It is easier to think about this role in physical product industries than in digital product or service companies, but automation will likely happen faster in the latter, so we may end up having a Chief Physical Robotics Officer (to manage the supply chain workflow) as well as a Chief Digital Robotics Officer (to manage the automation of processes and activities).
Thanks Chris - a bit speculative in some points but I think useful to at least start a conversation
Data Science Startup in India: EdGE Networks
With artificial intelligence (AI) starting to impact many aspects of our personal and working lives, it’s only natural that it should be used in one of the most challenging areas of corporate operations — human resources. Indian startup EdGE Networks is hoping that its technology will help bring AI into the human resources sector. It has developed a system that uses AI and data science to help companies hire the best people and manage their workforce and talent acquisition. https://edgenetworks.in/2017/08/24/startup-profile-edge-networks/
AI WILL MAKE FORGING ANYTHING ENTIRELY TOO EASY
“Lordy, I hope there are tapes,” said an exasperated James Comey in his testimony before the Senate Intelligence Committee on June 8. Comey’s desire reflects a familiar one for individuals accused of lying when the stakes are high. The former FBI director wished for tapes because, in our society, audio and video recordings serve as a final arbiter of truth. He said, she said always loses to what the tape shows. Today, when people see a video of a politician taking a bribe, a soldier perpetrating a war crime, or a celebrity starring in a sex tape, viewers can safely assume that the depicted events have actually occurred, provided, of course, that the video is of a certain quality and not obviously edited. But that world of truth—where seeing is believing—is about to be upended by artificial intelligence technologies. https://www.wired.com/story/ai-will-make-forging-anything-entirely-too-easy/