Aim

I am a Node on the Edge because I cannot claim to have all the answers or a decade-long history of experience, but I can help to define questions to be solved. Hopefully there is a potentially finite set of questions that allows most requirements to be solved, and my aim is to define these in collaboration with the community.
It would be amazing if these questions developed into some kind of NP-Complete space where they all reduced to one question, as in NP-Completeness theory or Monty Python's Holy Grail, but that is maybe asking too much?

Friday 19 May 2017

Node observing two collaborative Edges - From ARM to Machine Learning


This was a comparison of human and ML ability, a history lesson in innovation and an exposure to the significant collaboration of Hermann Hauser and Steve Furber. This collaboration has proved many things. From Acorn Computers to ARM, this country and industry would be far smaller without it. It runs from the time when hardware ruled and software was just something that displayed the capabilities of the hardware, to, as Hermann put it, a somewhat unheard of and unique case of commercialising a technique, reduced instruction set computer (RISC) chip design, largely started or defined in the US while being capitalised on in the UK and Europe.
Despite the longevity of their collaboration, it is still obvious to observe Hermann's glee, exasperation and surprise at having found the perpetual motion magic box that is the output of Steve's relentless questioning of his own, and others', concept of what we know. According to Steve, what we know is very little. Hermann has demonstrated how this little has been sufficient for us to universally understand, simply simulate and repeatedly replicate our knowledge; in their collaboration this has been on a low-powered piece of silicon. Having commercialised RISC into a large majority share of the mobile chipset market through founding ARM, their experience cannot be measured and their expertise is uniquely varied and highly applicable to new advances.
The aim of this gathering is to debate the commercialisation of Artificial Intelligence. So when we debate AI we assume it is an attempt to understand, simulate and replicate our knowledge and our intelligence. This is what we can do.
Yet there may be greater meaning, forces and mimicking that we lose when we attempt to understand, simulate and replicate our knowledge and our intelligence. We can, perhaps, replicate procedures of intelligence. Is it a cat? It has some feet and looks similar to others, so it is. What is a cat? It's a definition. How can we be sure this is not another definition we've never heard of? According to Hermann there is no true or false and there is no certainty; there is just probability. This is a fundamental change, from pre-designed procedures for every situation to a probability replacing every truth we thought we knew. It is here that Steve's perpetual questioning is so beneficial, something I have been on the other end of: receiving (most likely automated) e-mails about Steve's university course saying that there would be an open-ended questioning session instead of traditional lectures, that all notes were online, and that any questions different from the previous year's would be answered then. This highlights our own handling of true and false: if we agree something is true, we don't question it. By being able to determine every concept, object and action, we can therefore ultimately understand, sophisticatedly simulate and rudimentarily replicate our knowledge and intelligence. This is what Alan Turing, of imitation game and modern computer fame, defined: if a problem is deterministic, i.e. we can determine the answer, we can simulate it given the processing power to do so. For problems with some uncertainty whose answer we cannot know, say a stock price in a year or the imminence of an earthquake, there is no determinism. So it was established that these problems cannot be simulated, and our universal understanding that uncertainty was indeed uncertain was ingrained.
This is such a fundamental change because machine learning allows some of these uncertain problems to be solved to a certain accuracy. At a certain accuracy, as Steve pointed out, it does not matter if you can only be 99% certain something is true. Identify 100 cats and mistake one mouse for a cat and you still know what a cat is; identify your own cat 100 times and mistake the neighbour's cat for yours once every three months and it is forgivable. Exchanging the determinism of being true or false for a probability of being true is what machine learning does. An expert estimate is better than no expert estimate, the expert qualification being the level of accuracy of the probability. This is a sometimes overlooked criterion. Identifying a cat matters if you are in a pet shop buying a cat, because walking out with a parrot when you went in for a cat is only a benefit when retelling the story to one's grandkids much later on. Knowing that the cat could get a disease with 50% probability is useful when deciding whether to buy insurance. So in replacing true with a probability of true, it is important to recall that we sometimes need to know how true. It is also important to recall that something is never simply true, just an assumption that is a convenient truth. This leads to an interesting question: if we can determine uncertainty so that we can ignore it, do we want to understand, simulate and replicate the intelligence that ignores uncertainty, or do we want an AI that removes all uncertainty? We discovered that we can automate, evaluate and repeat problems that are deterministic; we could potentially repeat until all uncertainty is removed. This is very much a future question. It does highlight Steve's claim that we haven't moved on much. We can just imitate recognition, but there are much higher levels of intelligence that operate in high levels of uncertainty, ignore it and remove it.
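As a rough illustration of this shift from true/false to a probability of true (a sketch of my own, not code from the talk; the function name and thresholds are made up): the classifier gives a probability, and how much probability we demand before acting depends on how much the decision matters.

```python
# Toy sketch: a probabilistic "is it a cat?" answer, accepted only above a
# threshold that depends on how much the decision matters. Illustrative only.

def accept_as_cat(p_cat: float, high_stakes: bool = False) -> bool:
    """Turn a probability into a decision with a stakes-dependent threshold."""
    threshold = 0.99 if high_stakes else 0.90
    return p_cat >= threshold

# Tagging photos: 95% confidence is fine; one mouse mislabelled as a cat is forgivable.
print(accept_as_cat(0.95))                    # True

# Buying insurance against a possible disease: demand more certainty before acting.
print(accept_as_cat(0.95, high_stakes=True))  # False
```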

These could be a benefit, although what we have is already a benefit to many.

This topic of uncertainty highlights the uncertainty in their own joint ventures: it was uncertain within ARM Holdings which advancement was going to be the success.
Their initial collaboration started with Acorn Computers, which had success and fame from the BBC Micro in education. Ultimately there were some uncertainties that could not be ignored. There are debates over what these were; they are not completely described by a massive drop in the market, a company acquisition deal and large overheads. According to another article, the value from Acorn was the basis for three titans, Acorn, Apple and VLSI, that went on to fund a new entity. The value that had given the earlier success, still there but not visible, was ARM: the RISC chipset that powered the processor, the CPU found in every PC and mobile device. This new entity was ARM Holdings, a reverse takeover of Acorn. The ARM RISC chipset was in a joint venture with Apple, which held shares in ARM and used its RISC chipset. When Apple was in decline and Steve Jobs returned, it is detailed in Walter Isaacson's book on Steve Jobs that the selling of ARM stock in the late 90s helped give the extra funds for the reinvention of Apple.
Acorn could not be saved from its uncertainties, while the third titan, VLSI, used the RISC chipset and had success.
ARM Holdings in the early 90s had only 12 employees, a revolutionary product and bitter experience.
It is from here that Hermann, with his still obvious astonishment at the hidden simplicity of the business model's success, explains that, like many of its contemporaries, ARM is not what it seems. He states his favourite line: "we have never sold a RISC chipset. ARM Holdings does, however, offer licences." A small company had no chance to make a chipset and commercialise it, even if its designs were highly advanced. It is here that their collaboration becomes so apparent: making the value, finding the most opportunity in that value, finding the market for it and approaching this market with an optimised offering. Steve managed to create this value many times. Hermann knew the market, how to approach it and how to stay in it.
Their individual successes are actually very similar and can be stated as less for more. Steve squeezed more processing out of less power and size. Hermann gave away less of the value, just a licence rather than a product, for more profit. This is a lesson for anyone in a large industry dominated by large players. There were chipset creators who were very effective and advanced at what they did, and ARM Holdings allowed them to do it more efficiently and gain new advancements, whereas its competitors, i.e. Intel, relied on creating chipsets themselves. It may not have been a large disadvantage for Intel to produce its own, but selling a licence instead of a product allowed a small company to be successful. It is claimed that their collaboration did not plan its approach to market or management in advance. The failure of Acorn highlights this, although it was not critical; it proved better to make industry ventures, evolve with the market, and ultimately the business had evolved by the time market demand was there for RISC licences. The success of understanding that the chipset's main value lay in mobile devices was its largest advancement, and it meant an almost immediate roll-out by Texas Instruments in Nokia devices, which dominated demand at the beginning of the mobile era and led to ARM Holdings holding 95% of the RISC chipset market share, poised to continue.
This is some way from the latest ML, the topic of the debate. A RISC chipset can do just a few procedures, maybe calculate your weekly spending. But the RISC chipset has changed, and ML is not just about doing many procedures. When many of these RISC chipsets are used together in a structure or network inspired by biology, they can do ML much more easily. This is demonstrated by Steve's research project SpiNNaker, which is replicating 1% of the brain's power and neurons.
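To make the "many simple cores wired like biology" idea concrete, here is a tiny sketch of my own (not SpiNNaker code): each artificial neuron needs only multiplies, adds and a threshold, operations a small RISC core handles cheaply, and the capability comes from connecting many of them together.

```python
# Toy sketch of why simple cores suit neural computation: one "neuron" is just a
# weighted sum plus a nonlinearity; a network is many of these wired together.

def neuron(inputs, weights, bias=0.0):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU-style activation

# A "layer" of three neurons over two inputs: the structure does the work,
# not any single sophisticated core.
layer_weights = [[0.5, -0.2], [0.1, 0.9], [-0.3, 0.4]]
print([neuron([1.0, 2.0], w) for w in layer_weights])
```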
Hermann explains that ML is a significant change because it creates new business models. Stuff once done by someone can now be automated. ML, according to Hermann, is significant because it brings new business models and can be applied in so many more industries: the automation of any recognition, from behaviour in Uber or by FiveAI (a company linked to Hermann) to object recognition. Hermann correctly identifies that when this has happened before there has been more job creation than job loss, although those who lost jobs were not those who gained the new ones. This was highlighted again by a question on whether we are preparing for this transformation, with Steve's answer a simple no. Despite the apparent certainty of ML and its advantages, there is still a lot of uncertainty that is not known. Hermann agrees and describes the perpetual ability of humans to adapt to this uncertainty, which makes it possible to incorporate ML to our advantage.

So can ML be used to reduce and maybe remove uncertainty, i.e. predict everything? Can hardware be biologically inspired, similar to the SpiNNaker project that Steve is leading? Can FiveAI reduce the margin in the probabilities so as to automate, to a high enough level, decisions that do not matter to us, and so reduce our uncertainty? Maybe quantum computing devices can cope with the reduction of uncertainty, but can we cope with not having the uncertainty?

The answers to these are very uncertain. 
      

There are of course lessons to learn from this debate and its panel other than that reducing uncertainty is uncertain. That this collaboration is heavily rooted in one city, Cambridge. That the success of something can be very delayed. That there are many giants which dominate sectors, but innovation helps to avoid competition and conflict with them. That identifying the value in your offering is critical, and giving less of it for more gives longevity; this is displayed by ARM Holdings' 450 clients purchasing licences, giving ARM more revenue than all of Intel's revenue in total. The giants usually have some intrinsic method that defines them, or is so embedded that they often choose to continue with it through brute force; innovation usually cannot break this. This intrinsic method is the limitation, degradation or continuation of the giant. Innovation can move around it, facilitate, and use other business models to become bigger than the giant.
