Where will artificial intelligence lead us in the coming years? Will we all be served by robots?

I gave the following speech at a dinner at Middle Temple, in the Queen’s Room, on 27 May 2016, to which I had been invited by the Great British Business Alliance.

Where will artificial intelligence lead us in the coming years? Will we all be served by robots?

Will my vacuum-cleaning robot, Berta, use my credit card to pay for its holidays?

These are questions I hear often lately.

Well, maybe not the last one.

Professor Stephen Hawking warns us that artificial intelligence could end mankind.

“The development of full artificial intelligence could spell the end of the human race,” he told the BBC in December 2014. “[AI] would take off on its own, and re-design itself at an ever increasing rate,” he said. “Humans, who are limited by slow biological evolution, could not compete and would be superseded.”

A gloomy outlook.

The industrial revolution brought new manufacturing processes, a change from hand production methods to machines, new chemical manufacturing and iron production processes based on water power and then steam power. The societal change was profound. Almost every aspect of daily life was affected in some way. Physical labour was, in many ways, replaced.

The adaptation of society to these changes, though, took place over several generations.

Moving from a production economy to a service economy in the 20th century was a painful process. Unemployment was high at times. The workforce needed was, and is, at times not the workforce available. Part of the workforce can be retrained, but many workers need to change career paths altogether, and many go into long-term unemployment.

But still, the societal change had time to take place over several generations.

We are still adapting – Tata Steel selling its UK plants in England and Wales puts 4,000 jobs, and the local economy, at risk. The UK Government Digital Strategy aims at redesigning the government’s digital services to bring better services to the public and to save about £1.8 billion each year. This will not free civil servants from repetitive tasks to work on harder problems and improve their services. It will free many of them of their jobs.

A strategic decision.

A responsible one?

The societal change from the information society to the knowledge society now brings a new threat. Mental work, knowledge work, and the kinds of services that civil servants provide are coming under threat. And there is no place for them to go, is there?

According to Russell and Norvig’s standard textbook, “Artificial Intelligence is the intelligence exhibited by machines. In computer science, an ideal ‘intelligent’ machine is a flexible rational agent that perceives its environment and takes actions that maximise its chance of success at an arbitrary goal.”

You could apply the term ‘artificial intelligence’ whenever cutting-edge techniques are used by a machine to competently perform or mimic ‘cognitive’ functions that we intuitively associate with human minds, such as learning and problem solving. If the machine performs tasks associated with a certain mystique, it is perceived as magic. Optical character recognition and route planning are no longer perceived as AI; they are just everyday technologies today. Chess-playing programs have lost their mystique. And Go is on its way to being demystified, now that the AlphaGo system has beaten professional players. Robots such as self-driving cars are probably closest to what we associate with artificial intelligence.

The field of artificial intelligence was founded at a conference at Dartmouth College in 1956. The founders of the field, including John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon, were quite optimistic about the success of AI. To quote Herbert Simon, “machines will be capable, within twenty years, of doing any work a man can do.”

Well … Maybe Herbert Simon did not mean ‘Earth years’?

In any case: How do you measure success in AI? When is a machine capable of doing any work a man — sorry — a human can do?

This question had been addressed before the legendary Dartmouth conference by Alan Turing, the mathematical genius who not only broke the Enigma code during WWII but also formalised the concepts of algorithm and computation with the Turing machine, a model of a general-purpose computer. Alan Turing is widely considered to be the father of artificial intelligence.

In his journal article on Computing Machinery and Intelligence, Alan Turing discussed the question of whether a computer could think. He had no doubt. And he put forward the idea of an ‘imitation game’: if a human and a machine were interrogated in such a way that the interrogator does not know which is the human and which is the machine, and the interrogator cannot distinguish between them by questioning, then it would be unreasonable not to call the computer intelligent.

There are competitions around the world where you chat with a communication partner via text messages and have to find out whether you are talking to a human or a machine. – By the way, we use CAPTCHA checks every day to prove that we are humans on the Web, while at the same time teaching ‘the machine’ what characters we recognise in a picture of a text snippet. –

In his paper, Turing makes a claim similar to the one Stephen Hawking makes: “machine intelligence could take off once it reached a ‘critical mass’”.

Are we there yet? Not by a long shot.

Let’s go back 30 years.

I finished school in 1986 and went into the army for 15 months before I started studying computer science in 1987, with a focus on artificial intelligence. The German Research Centre for Artificial Intelligence (DFKI) was founded in the same year. DFKI is today the biggest research centre for AI in the world. The institute had only just been set up when the second ‘AI winter’ hit academia hard, with reduced funding and, generally, a reduced interest in AI. Not surprisingly so: expectations were inflated and badly managed.

Two ‘AI winters’ – in analogy to nuclear winters – are widely recognised. The first period of reduced funding lasted from 1974 to 1980. UK research was hit especially hard. Professor Sir James Lighthill reported to the UK Parliament on the utter failure of AI to achieve its ‘grandiose objectives’. His conclusion: nothing being done in AI could not be done in other sciences. As a result, AI research was completely dismantled in England.

Similar funding cuts happened throughout the continent and in the United States.

The second AI winter lasted from 1987 to 1993. Again AI research could not deliver what was expected by industry, governments, and the public.

So, what was the world like thirty years ago, back in 1986?

The past: 1986

The Macintosh had stepped onto the world stage just two years before, with a graphical user interface and a mouse as input devices. The first ‘personal computer’, as Steve Jobs put it. MacPaint and MacWrite were the first applications bundled with the Macintosh. Microsoft Windows was just one year old, and Windows 2.0, the first usable version, was still a year in the future.

AI at that time?

Object-oriented programming in the form of a language called Smalltalk-80 was a big thing – Smalltalk was developed by Xerox PARC, the Palo Alto Research Centre; the same organisation that developed the graphical user interface paradigm and the mouse, and apparently did not know what to do with them commercially, until Steve Jobs came for a visit. Today’s programming languages C++, Java, C#, and Swift, for example, are all based on those same principles – but, from an AI perspective, they are still not as powerful as Smalltalk. Other important AI languages at that time were LISP, Scheme, and PROLOG.

The problems we addressed at that time were knowledge representation and planning problems.

Let’s look at two illustrative examples:

We describe the state of a world made of blocks and a table, the Blocks World. Let’s say, we have three blocks. Block 1 sits on the table. Block 2 sits on the table, and Block 3 sits on top of Block 2. The computer is then tasked to build a stack where Block 2 is on top of Block 1 and Block 3 sits on Block 2.

Three-year-olds do not have a problem with finding the following sequence of actions: Remove Block 3 from Block 2; put Block 2 on Block 1; put Block 3 on Block 2.

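To make the representation concrete, here is a minimal sketch in Python of how such a state and its single ‘move’ operator might be written down. It is an illustration, not a historical planner; the fact tuples and block names are my own choice.

```python
# A tiny, hypothetical sketch of the Blocks World in a STRIPS-like style:
# a state is a set of facts, an action has preconditions plus add/delete effects.

initial = {("on", "B1", "table"), ("on", "B2", "table"), ("on", "B3", "B2"),
           ("clear", "B1"), ("clear", "B3")}
goal = {("on", "B2", "B1"), ("on", "B3", "B2")}

def move(state, block, src, dst):
    """Move `block` from `src` onto `dst`, or return None if not allowed."""
    allowed = (("on", block, src) in state
               and ("clear", block) in state
               and (dst == "table" or ("clear", dst) in state))
    if not allowed:
        return None
    new = set(state)
    new.discard(("on", block, src))   # delete effects
    new.add(("on", block, dst))       # add effects
    if src != "table":
        new.add(("clear", src))       # the block we moved off is clear again
    if dst != "table":
        new.discard(("clear", dst))   # the block we covered no longer is
    return new

# The three-year-old's plan, executed step by step:
state = move(initial, "B3", "B2", "table")
state = move(state, "B2", "table", "B1")
state = move(state, "B3", "table", "B2")
assert goal <= state                  # all goal facts now hold
```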

Another example that gives you an idea of how difficult it was to deal with real-world problems is the mathematical puzzle Towers of Hanoi.

There are three rods and a number of disks of different sizes which can slide onto a rod. We begin with the disks in a neat stack: The biggest disk is at the bottom, the smallest disk is at the top.

The objective is to move the entire stack to another rod, following a few simple rules:

  1. Only one disk can be moved at a time.
  2. Each move consists of taking the topmost disk from one of the stacks and placing it on top of another stack, meaning: a disk can only be moved if it is at the top of a stack.
  3. No disk may be placed on top of a smaller disk.

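Once the representation is right, the solution itself is strikingly compact. Here is a minimal recursive sketch in Python (the rod names are arbitrary); it solves an n-disk puzzle in 2^n - 1 moves.

```python
def hanoi(n, source, target, spare):
    """Print the moves that transfer n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)   # clear the way for the largest disk
    print(f"Move disk {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source)   # restack the smaller disks on top

hanoi(3, "A", "C", "B")                   # 2**3 - 1 = 7 moves
```

The hard part in 1986 was not this recursion; it was everything around it: representing the rods, the disks, and the legality of a move explicitly enough for a machine to reason about them.
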
What were the main difficulties?

How do you represent the state of the world? How much detail do you need to provide? What are the essential relationships that you need to model?

This representation problem is called the frame problem.

And, in the case of the Blocks World: when we leave the theoretical description, in which only such objects as a table and blocks exist and only such operations as ‘grasp’, ‘un-grasp’, and ‘put’ are defined, questions pop up such as: How big is the table? Where do I put a block on that table with the robotic arm? What happens if a block is dropped and falls off the table?

In 1986, AI was still in its infancy. All development was insular. Specialised hardware was available to speed up the execution of programs written in such AI languages as LISP. But the processing power was just not up to the task.

Representing knowledge was a difficult process as well. Re-use of represented knowledge was a huge problem. The major limiting factor for quicker development and for the exchange of algorithms and represented knowledge was the lack of standards.

The past: 2001

Fast forward 15 years to 2001.

No Space Odyssey.

No alien artefact.

No HAL 9000.

We have just ‘survived’ the ‘year 2000 problem’ and cured the ‘Millennium bug’.

The year 2000 marked an important first milestone for ubiquitous computing and the Internet of Things.

In 2000, the first phone marketed as a smartphone, the Ericsson R380 Smartphone, became available. A smartphone is a device combining features of a mobile phone with those of other mobile devices such as personal digital assistants. It has to be said, though, that the first smartphone, Simon, had already been developed by IBM in 1994. Simon had a touch screen (and a pen) and already ran such apps as Email, Fax, Address Book, Calculator, Calendar, Notepad, Sketchpad, and To Do.

In 2000, the first camera phones were sold. I got my first phone camera, the MCA-25, in 2002. It was a separate accessory that plugged into my Sony Ericsson T300. Resolution: 300,000 pixels. A far cry from today’s resolutions. The phone could store 147 pictures.

A lot of software solutions based on AI research have become successful. Due to the second AI winter, though, solutions were rather called ‘smart’; the term ‘artificial intelligence’ was to be avoided at all costs. At the turn of the millennium, I was finishing my Ph.D. while working at a university spin-out company. Whenever we went to Daimler, Audi, or Siemens, the decision-makers did not want to hear a word about AI. AI equalled failure at that time.

“Many observers,” Ray Kurzweil wrote in 2005, “still think that the [last] AI winter was the end of the story and that nothing since has come of the AI field, yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry.”

For example, route planning and GPS navigation started to become widely available. In 2002, TomTom released its first route planning product, TomTom Navigator, on a handheld device.

The present

Since 2007, the iPhone has shaken up the mobile phone market, the PDA market, and the media player market. iOS and Android devices helped extend the compact camera market – and the games market. Apple’s App Store and Google Play bring new business opportunities. Daily.

Social software emerged with a vengeance.

Facebook has been around for twelve years now. It has amassed 1.65 billion monthly active users [1]. Twitter, founded ten years ago, has 300 million monthly active users [2]. Google Plus, Instagram, Snapchat, etc., etc. All of them provide opportunities for data collection on an unprecedented scale.

Search engines, on the other hand, guide us to lots of information. Knowing our search history, Google helps us dig through the Internet.

Wolfram Alpha, a ‘computational knowledge engine’, uses its own knowledge base to answer specific questions, for example, about mathematics or people and history. It is a question answering system in contrast to Google’s Internet search.

Stephen Wolfram, by the way, is a British scientist.

Siri, Apple’s voice-controlled personal assistant, interprets spoken natural language for fact finding and searching. Among the search engines it uses are Google and Wolfram Alpha. Microsoft calls its own version Cortana. And Google has Voice Search.

Programming frameworks such as Microsoft’s Bot Framework enable the development of competent chat bots in a short time and on a larger scale. You can ask questions in natural language via speech, Skype chats, or Facebook Messenger.

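To make the idea concrete, here is a deliberately simple, hypothetical intent-matching bot in Python. This is not the Bot Framework API; it only illustrates the keyword-to-reply pattern that such frameworks wrap in dialogue management and channel connectors for Skype or Facebook Messenger. The intents and replies are invented.

```python
# A toy intent-matching bot (hypothetical; not the Bot Framework API).
INTENTS = {
    "opening hours": "We are open Monday to Friday, 9am to 5pm.",
    "holiday": "I can look up flight and hotel options for you.",
    "credit card": "I promise not to use your credit card for my own holidays.",
}

def reply(message: str) -> str:
    """Return the canned answer for the first intent keyword found."""
    text = message.lower()
    for keyword, answer in INTENTS.items():
        if keyword in text:
            return answer
    return "Sorry, I did not understand that. Could you rephrase?"

print(reply("What are your opening hours?"))
print(reply("Could you book me a holiday?"))
```
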
Mobile devices are as powerful as never before: mobile phones, tablets, and wearable technology such as the Apple Watch or the activity trackers developed by Fitbit and Jawbone are delivering services everywhere. You want to know about your sleep patterns and improve your rest? Get the appropriate device and app.

Those powerful devices are partnered with high-speed networks. Always on, always connected is already a reality for many of us, even though bandwidth is still an issue.

Artificial Intelligence today is called Artificial Intelligence again. The Internet of Things, the Web of sensors and connected devices, is already shaping up.

I have already mentioned chess-playing software and AlphaGo.

The future: 2031

Jumping forward fifteen years. We are in 2031.

It is safe to say that computational power will have increased immensely. Moore’s Law – which predicts that the number of transistors on a chip roughly doubles every two years – will see to it.

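As a back-of-the-envelope sketch, assuming that two-year doubling simply continues until 2031, the growth factor is easy to compute:

```python
# If transistor counts double roughly every two years, how much denser
# will chips be in 2031 than in 2016? (Assumes the trend simply continues.)
years = 2031 - 2016
doubling_period = 2                      # years per doubling
growth = 2 ** (years / doubling_period)
print(f"about {growth:.0f}x more transistors")   # roughly 180x
```
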
Wireless and mobile networks will have merged. We will always be on. Our data will all live in the cloud, on servers and not on our devices.

Our interactions on social networks will intertwine with the real world even more. Personalisation and decision support will be part of our business and personal life.

Medical services, for example, will take into account actual data from us. And we will be better-informed patients thanks to health apps. Sensors will deliver information around the clock. Through the Internet of Things, ubiquitous computing will have become reality.

The future: 2046

Jumping forward another 15 years.

The famous Greek aphorism or maxim ‘Gnothi seauton’, know thyself, is, in one interpretation, a warning to pay no attention to the opinion of the multitude. By 2046 the multitude may very well know more about you than you do. This offers an opportunity to learn about ourselves, though. What do the digital traces that we leave behind tell us about ourselves? Our behaviour? Our ethics? Our moral standards?

The personal assistant we all crave might be there as a companion app. We might even all have a team of AIs working with us on our daily problems. Our health app might argue with our career app about a healthy balance of work and rest when discussing the next set of meetings. The health app will be supported by lots of health data: exercise, caloric intake and burn, the variety of our diet. The apps will learn from interacting with us, judging how we take advice and how to approach us differently when we make unwise decisions. I imagine them as versions of family and close friends, just based on facts and not only on good intentions and the latest story from such magazines as Health and Fitness or Yoga. Apps will interpret and act upon data acquired from the Internet of Things.

We will be able to discuss our decision making based on live facts and hard evidence. We will be able to ask for justifications and their relevance to the current situation, make predictions, and simulate outcomes. We will be able to easily include relevant experts in the decision making.

Wrapping up

Let’s come back to the initial questions: Where will artificial intelligence lead us in the coming years? Will we all be served by robots?

Elon Musk, the billionaire founder of SpaceX and Tesla, has compared developments in AI to ‘summoning demons’. Just like Stephen Hawking, he sees in those developments a threat and the possible end of the human race, with Moore’s Law being one of the main drivers of the recent development.

Luciano Floridi, professor of philosophy and ethics of information at the University of Oxford, does not think so. “No AI version of Godzilla is about to enslave us, so we should stop worrying about science fiction and start focusing on the actual challenges that AI poses,” Professor Floridi says. “Moore’s law is a measure of computational power, not intelligence.”

“My vacuum-cleaning robot,” Floridi continues, “will clean the floor quickly and cheaply and increasingly well, but it will never book a holiday for itself with my credit card.” And: “Anxieties about super-intelligent machines are, therefore, scientifically unjustified.”

Artificial intelligence, like any other technology, will not lead us. We need to take the lead and decide what to make of this technology. There are plenty of opportunities and useful applications, as I have pointed out.

Developing cognitive abilities within machines will have dramatic consequences, though. Moving from manual labour to machine-powered production set people free to get a better education and to create other types of businesses. But where do we go when – not if – machine intelligence replaces accountants, administrators, and middle management?

In the Science Fiction universe of Frank Herbert’s Dune a war with highly developed machines nearly wiped out mankind. The survivors made a drastic decision; they vowed to never develop machine intelligence again, turning to genetic engineering to ‘evolve’ humans.

It is us [here in the room] who have to make up our minds about the kind of society we want to live in. Maybe one in which Berta, my nosy, gossiping vacuum-cleaning robot, takes a holiday now and then.

References

[1] http://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/

[2] http://www.statista.com/statistics/282087/number-of-monthly-active-twitter-users/

Artificial Intelligence: Case-based reasoning for recommender systems

Last year I gave an invited talk at Techsylvania 2015 in Cluj, Romania, on Case-based Reasoning for Recommender Systems. You can find my slides here (including a video recording of my talk) and the video separately on YouTube here.

AI - Using CBR to build Recommender Systems

Tutorial: myCBR and Colibri Studio at AI 2012 Conference

From CBR researchers and students we learned that Colibri Studio and myCBR are perceived as competitors. But nothing could be further from the truth. Colibri Studio and myCBR complement each other. At ICCBR 2012 we gave tutorials on our open source tools (announced here) together and decided in discussions afterwards to further promote their interoperability at other venues.

We are happy to showcase myCBR and Colibri Studio at AI 2012, the thirty-second Annual International Conference of the British Computer Society’s Specialist Group on Artificial Intelligence (SGAI), which will be held in the attractive surroundings of Peterhouse College in Cambridge. The introductory tutorial will be given as part of the main conference by Christian Sauer, University of West London, and Dr Juan Antonio Recio García, Complutense University of Madrid.

Additionally, there will be a talk about myCBR and Colibri Studio at the 17th UK Workshop on Case-Based Reasoning (UKCBR 2012).

You can register online here.

The BCS Specialist Group on Artificial Intelligence

The Specialist Group on Artificial Intelligence (SGAI) of the BCS, The Chartered Institute for IT, is one of the leading AI societies in Europe. SGAI is very active in promoting the topic of artificial intelligence by organising various events such as the annual international AI conference series in Cambridge. I am happy to now be a (co-opted) member of the SGAI committee. This gives me the opportunity to raise the profile of the School of Computing and Technology at the University of West London and gives me insights into upcoming activities such as the following at the BCS London Office (near Covent Garden):

  • Real AI Day: an event designed to showcase practical applications of artificial intelligence. Friday, 5 October 2012, 9am to 5pm
  • BCS Machine Intelligence Competition. Friday, 5 October 2012, 6pm to 8pm
  • One-day conference on Knowledge Discovery in Databases (UK KDD). Friday, 19 October 2012, 9am to 5pm. UK KDD will be co-organised by Miltos Petridis, Dan Neagu, Max Bremer, and myself. Check out the website for more details.

Report: 12th Social Study of ICT workshop (SSIT12)

Health Information Systems: Searching the Past – Finding a Future

Hosted by the London School of Economics on 18 April 2012, the 12th Social Study of ICT workshop (SSIT12) looked at the past and the future of Healthcare Information Technology (HIT). The workshop series is organised by the Information Systems and Innovation Group.

The keynote speakers focused on such questions as “how helpful is information technology for patients, practice, or payers?” and “the important role of ‘open’”. Both speakers, Ross Koppel, University of Pennsylvania, and Bill Aylward, Moorfields Eye Hospital NHS Trust, highlighted the problem of closed systems and the feeling of being held hostage by HIT vendors.

Ross Koppel gave a lot of examples of bad UI design in healthcare information systems, with sometimes deadly consequences, e.g., when a dosage is calculated wrongly. He showed how people work around software issues, again sometimes with bad consequences for patients. Bill Aylward then focused on ideas of openness and transparency in open source development and bug tracking as a way of dealing with quality issues. Developers and HIT users are often very far apart during software development. Open Eyes shows how to bring them closer together in an open source project.

For Bill Aylward, HIT should be more like air traffic control software, with problem-focussed user interfaces and swift response times. Instead, HIT has its data all over the place, which requires its users to wait 2-6 minutes on average just to open a patient record. His vision: an ecosystem of apps, like on iOS devices such as the iPhone, where data is shared but apps are independent.

The other speakers explored the “consequences of using electronic patient records in diverse clinical settings” (Maryam Ficociello, Simon Fraser University), viewed “evaluation as a multi-ontological endeavour” (Ela Klecun, LSE), and took us on a “Journey to DOR: A Retro Science-Fiction Story on researching ePrescribing” (Valentina Lichtner, City University). The last session closed with talks on “Real People, Novel Futures, Durable Presents” (Margunn Aanestad, University of Oslo) and “Awaiting an Information Revolution” (Amir Takian, Brunel University).

The speakers provided lots of evidence for the need for software that can explain (at least some of) the design rationale of the software engineer in order to bridge the gap between software engineer and user. Bringing them together, as in the Open Eyes project, is one way of dealing with the issue. But not all users can be included in the development. New users will not know about the design rationale and will not have access to the respective software engineers. This is where explanation-aware software design (EASD) comes into play. EASD aims at making software systems smarter in their interactions with users by providing such information as background information, justifications, and provenance information.

Workshop programme

Report: CPHC conference and BCS symposium 2012

The annual Council of Professors and Heads of Computing (CPHC) conference and the annual research symposium of the BCS Academy, from 10 to 12 April 2012, provided me with a lot of new insights into current issues and the structure of the UK’s (computing) academia. (Thanks, Miltos, for your many explanations and your patience!) The event was hosted by the University of York.

A current hot topic for CPHC and BCS is Computing at School (CAS). Simon Humphreys, coordinator of the CAS initiative, and Simon Peyton-Jones, Chair of the board of members, reported on current developments. The working group is a grassroots movement supporting teachers of computing and ICT. It currently has more than 1,300 members, growing weekly, and is about to launch a computer science teachers’ association.

Only recently has the UK government understood the difference between ICT and Computing and that learning how to use Excel etc. is not enough. This is quite similar to the situation in Germany, where the German Informatics Society (GI e. V.) tried for many years to make government (and journalists too, by the way) learn this distinction. Now we, the computer science researchers and lecturers, can tell government what we expect students to know when they come to university. The problem is – and the discussion showed that quickly – that we do not yet know what we deem essential, even though we agree that coding is as important a basic skill as, for example, knowing the basics of chemistry, physics, and mathematics. As the UK government wants to make the changes effective as early as September, we need to act quickly. As universities can and should support schools in coming up with a curriculum, CAS is looking for partner universities to help schools make better-informed decisions on what and how to teach our subject.

➔  Todo: Check UWL’s involvement in local school development.

CPHC conference programme (pdf)
BCS Academy Symposium programme (pdf)

Towards developing a ‘research culture’

The University of West London has put research prominently on its development agenda. Having been a teaching-oriented institution for quite some time, the effort to now also become renowned for research exerts considerable pressure on UWL’s staff and staff development. Even though the School of Computing and Technology has already fared well regarding research, it faces new challenges.

Last Friday I organised a Research Day to discuss these issues amongst ourselves and with the Interim Pro-Vice Chancellor (Research & Enterprise). The main topic of the Research Day was to discuss how to reconcile teaching and research. As such an event was a bit overdue, most of the day was spent discussing problems and less on thinking about new research venues, as originally planned. It certainly was a very educational and enlightening event for me regarding such things as the current organisational culture, processes, and my colleagues’ self-conception. And I do not mean that in a negative way.

The main outcome of the Research Day is as simple as this: a better understanding of the status quo. Lots of issues are now on the table and can be addressed. It is also clearer now who wants to be more active in research but needs to be relieved of some teaching duties, e.g., through the employment of additional personnel. A closer look at these issues and opportunities over the next weeks, and further events like this one, will help us develop and move to a new organisational culture where everyone is involved in scholarly activity.

Report: SciTech 2011 – Innovation UK

The SciTech 2011 conference at the Barbican conference centre in London was quite an interesting introduction to UK R&D and innovation. The speakers in the morning session especially gave me a lot to think and learn (more) about.

Imran Khan—Director, Campaign for Science and Engineering in the UK—took a look at growth opportunities for UK High Tech exports with a focus on the BRIC nations. He stressed that High Tech companies rely on PhDs and that they know it. He made the point at the end of his talk that he thinks

“the sales of airwave spectrum for 4G telephony is science and engineering money and should be spent in science and engineering”.

Catherine Coates—Business Innovation Director, The Engineering and Physical Sciences Research Council (EPSRC)—presented facts and figures on how research is funded by EPSRC. She pointed out that EPSRC wants to “make the UK the most dynamic and stimulating environment for research and innovation in the world”. Some tools for that are Centres of excellence, EPSRC centres, Centres for doctoral training, and Industrial Doctorate centres (19 IDCs are currently funded). The UK, I learned, is the “most productive country in terms of citations achieved per £ invested”. EPSRC’s strategic goals: delivering impact, shaping capability, and developing leaders.

Stian Westlake—Policy and Research, National Endowment for Science, Technology and the Arts (NESTA)—presented a plan for innovation. In his view, measures need to be put in place regarding “research funding, procurement for innovation, access to finance, education, immigration, evidence-based policy, making Europe a true single market for services”. The plan comprises only policies. It still leaves out politics. Here,

“we need to make politicians implement changes”

by showing them ways to gain something for themselves, thinking also in their time frames of four to five years. Politicians at the moment very well grasp that research and development is key to innovation, but they also need ways to implement this.

Mike Short—President of the Institution of Engineering and Technology—concluded the morning session. He focussed on mobile phones and the Internet being the drivers of innovation in recent years. He sees three waves of ‘mobile’: connecting people, connecting people to the Internet, connecting everything. Spot on, I’d say.

The next session comprised a set of master classes. I first selected “Engineering global biological solutions — a knowledge transfer continuum”. Prof Nigel Titchener-Hooker and Dr Karen Smith, both University College London, reported on their knowledge transfer work in biochemical engineering. They described the impressive work of the UCL Advanced Centre for Biochemical Engineering and the available degrees (and how the biochemical degrees are complemented by engineering courses to accommodate the needs of the biochemical engineering industry). The Centre not only focusses on education for industry needs but also provides training for senior leaders in the bioprocessing industry (e.g., in 3-day courses). The Centre has a 12-strong advisory board of international calibre. A (printed) newsletter is sent out regularly to more than 5,000 subscribers to keep the community informed. Internship placements provide job opportunities and knowledge transfer.

One of the remarks I found notable was a clear statement on what the Centre is not doing:

“We don’t do contract research.”

Prof Titchener-Hooker described such research as short-sighted (as money would be the only outcome of it) and not REF-relevant (as typically no publications about respective research are allowed).

In the second master class, Dr Clive Edmonds—Chief Executive Officer, Scienta Group—and a colleague (whose name I forgot to note, sorry) gave a talk on “Innovation and Commercialisation: engines for growth”. Dr Edmonds promised right at the start that he would not tell us anything new. He kept his word, but he also reminded us of a lot of things in a very good presentation, such as:

  • To innovate is not only a verb but a mindset.
  • “Innovation means you make money from it.”
  • Innovation projects need: passion, purpose (a clear business objective), and pragmatism (dynamic approach to and drive of the project)
  • Typically, 60% of total profit comes from 14% of breakout innovation (of course, the risk is much greater than with incremental development).
  • Motivation + Creative Thinking + Expertise are needed for innovation. Not necessarily to be found in one person, but in a team.
  • True innovation takes place on the edge of chaos.
  • No success in innovation without having innovation culture!

Innovation (as well as creativity) needs the right environment to flourish in – in companies as well as in universities I might add. I wonder about Scienta Group giving a talk on innovation and growth at university. Hm …

The afternoon session was not as interesting to me as the morning one. You might find the list of “Ten thoughts that will change the world next” collected by Jheni Osman—host of the conference—of interest:

  • 3D printing (Sir James Dyson)
  • Quantum computers (Iain Lobban, GCHQ)
  • Ubiquitous computing (Michail Bletsas, MIT)
  • Mood-sensing TV (Dan Heaf, BBC)
  • Biomechatronics (Lesley Gavin, BT)
  • Cancer-busting beams (Steve Myers)
  • Biochar[coal] (Prof Tim Flannery)
  • Protocells (Rachel Armstrong)
  • Anti-ageing tech (Aubrey de Grey)
  • Conscious-o-meter (Prof Marcus du Sautoy)

The final talks were given by Dr Malcolm Parry—Chairman of The UK Science Park Association and MD of Surrey Research Park—on “Science Parks: Bringing a new knowledge domain to research”, followed by Andrew Miller—MP, Chair, Science and Technology Select Committee—on “The future of UK science”, and, finally, Prof Steve Caddick—Vice-Provost (Enterprise), UCL—on “University-business collaboration: driving innovation and growth”. Prof Caddick repeated some of the points already made by Prof Titchener-Hooker in his master class, albeit now on a university level. (You may want to have a look at the four grand challenges UCL identified: global health, human well-being, sustainable cities, and intercultural interactions.)

So, what do I retain from the conference:

There is quite a lot we, the University of West London, can learn from UCL. Granted, they are a much bigger university, but nevertheless. I also think we, as in we at the School of Computing and Technology, are doing quite a lot quite well already.

Second, from Scienta Group I take with me the need to have an innovation culture in place, at the level of the Centre of Model-based Software Engineering and Explanation-aware Computing, the School of Computing and Technology, and the university.

Report: 7th International and Interdisciplinary Conference on Modelling and Using Context (CONTEXT 2011)

After a four-year hiatus, the CONTEXT conference series came back to life and presented itself as professional and ambitious as ever. The Seventh International and Interdisciplinary Conference on Modeling and Using Context (CONTEXT) took place in Karlsruhe, Germany. It brought together researchers and practitioners from a wide range of disciplines. Thanks to Prof. Michael Beigl and his team — most notably Dr. Hedda Schmidtke — the conference turned out to be a great event. About 70 participants enjoyed the conference venue, the Karlsruhe Institute of Technology. Most of the participants attended nearly every talk. This shows how well the conference was received and how interesting the talks were, even when talks from fields far from one’s own research were sometimes not easy to follow. The conference dinner at the Centre for Art and Media (ZKM) was a highlight of the event.

Three invited talks marked the milestones of the main conference. On Wednesday morning, Jerry Hobbs reflected on “Discourse Interpretation in Context”. The second keynote was given by Ruth Kempson, King’s College London, on “Ellipsis in Conversational Dialogue”. Even though the last invited talk was on the morning after the conference dinner, Paul Holleis had many listeners for his talk on “Explicit, Generic, and Social Context”.

Of the three workshops before the main conference, I attended the workshop on Modelling and Reasoning in Context (MRC). The one-and-a-half-day workshop was a lively event that I enjoyed very much. In between the eight presentations, two panel discussions and an open discussion gave lots of opportunities to look at context from different angles.

The CONTEXT community decided to start a wiki to collect information about the topic of context and the people working on it. Stay tuned. More information coming up soon.

6th Workshop on Explanation-aware Computing ExaCt 2011, Barcelona, Spain

Both within AI systems and in interactive systems, the ability to explain reasoning processes and results can substantially affect system usability. For example, in recommender systems good explanations may help to inspire user trust and loyalty, increase satisfaction, make it quicker and easier for users to find what they want, and persuade them to try or buy a recommended item.

The workshop series aims to draw on multiple perspectives on explanation, to examine how explanation can be applied to further the development of robust and dependable systems and to illuminate system processes to increase user acceptance and feeling of control. ExaCt 2011 will be held in Barcelona, Spain, in conjunction with the 22nd International Joint Conference on Artificial Intelligence (IJCAI-11).

Suggested topics for contributions (not restricted to IT views):

  • Models and knowledge representations for explanations
  • Quality of explanations; understandability
  • Integrating application and explanation knowledge
  • Explanation-awareness in (designing) applications
  • Methodologies for developing explanation-aware systems
  • Explanations and learning
  • Context-aware explanation vs. explanation-aware context
  • Confidence and explanations
  • Privacy, trust, and explanation
  • Empirical studies of explanations
  • Requirements and needs for explanations to support human understanding
  • Explanation of complex, autonomous systems
  • Co-operative explanation
  • Visualising explanations
  • Dialogue management and natural language generation
  • Human-Computer Interaction (HCI) and explanation

Important dates (not finalised yet):

  • Workshop paper submission deadline: March 2011
  • Notification of workshop paper acceptance: April, 2011
  • Camera-ready copy submission: May 2011
  • Workshop (two days): July 16-18, 2011

Read the complete call for papers on the workshop website.
