Where will artificial intelligence lead us in the coming years? Will we all be served by robots?

I gave the following speech at a dinner at Middle Temple, in the Queen’s Room, on 27 May 2016, to which I had been invited by the Great British Business Alliance.


Where will artificial intelligence lead us in the coming years? Will we all be served by robots?

Will my vacuum-cleaning robot, Berta, use my credit card to pay for its holidays?

These are questions I hear often lately.

Well, maybe not the last one.

Professor Stephen Hawking warns us that artificial intelligence could end mankind.

“The development of full artificial intelligence could spell the end of the human race,” he told the BBC in December 2014. “[AI] would take off on its own, and re-design itself at an ever increasing rate,” he said. “Humans, who are limited by slow biological evolution, could not compete and would be superseded.”

A gloomy outlook.

The industrial revolution brought new manufacturing processes, a change from hand production methods to machines, new chemical manufacturing and iron production processes based on water power and then steam power. The societal change was profound. Almost every aspect of daily life was affected in some way. Physical labour was, in many ways, replaced by machines.

Society had time to accommodate these changes, though: the adaptation took place over several generations.

Moving from a production economy to a service economy in the 20th century was a painful process. Unemployment was high at times. The workforce needed was, and at times still is, not the workforce available. Part of the workforce can be retrained, many workers need to change career paths altogether, and many end up in long-term unemployment.

But still, the societal change had time to take place over several generations.

We are still adapting. Tata Steel selling its UK plants in England and Wales puts 4,000 jobs at risk, and the local economy with them. The UK Government Digital Strategy aims to redesign public digital services to serve the public better and to save about £1.8 billion each year. This will not free civil servants from repetitive tasks to work on harder problems and improve their services. It will free many of them from their jobs.

A strategic decision.

A responsible one?

The societal change from the information society to the knowledge society now brings a new threat. Mental work, knowledge work, and the kinds of services that civil servants provide are coming under threat. And there is no place for them to go, is there?

According to Russell and Norvig’s standard textbook, “Artificial Intelligence is the intelligence exhibited by machines. In computer science, an ideal ‘intelligent’ machine is a flexible rational agent that perceives its environment and takes actions that maximise its chance of success at an arbitrary goal.”

You could apply the term ‘artificial intelligence’ whenever cutting-edge techniques are used by a machine to competently perform or mimic ‘cognitive’ functions that we intuitively associate with human minds, such as learning and problem solving. If the machine performs tasks associated with a certain mystique, it is perceived as magic. Optical character recognition or route planning are no longer perceived as AI; they are just everyday technology today. Chess-playing programmes have lost their mystique. And Go is on its way to being demystified now that the AlphaGo system has beaten professional players. Robots such as self-driving cars are probably closest to what we associate with artificial intelligence.

The field of artificial intelligence was founded at a conference at Dartmouth College in 1956. The founders of the field, including John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon, were quite optimistic about the success of AI. To quote Herbert Simon: “machines will be capable, within twenty years, of doing any work a man can do.”

Well … Maybe Herbert Simon did not mean ‘Earth years’?

In any case: How do you measure success in AI? When is a machine capable of doing any work a man — sorry — a human can do?

This question had been addressed before the legendary Dartmouth conference by Alan Turing, the mathematical genius who not only helped break the Enigma code during WWII but also formalised the concepts of algorithm and computation with the Turing machine, a model of a general-purpose computer. Alan Turing is widely considered to be the father of artificial intelligence.

In a journal article, Computing Machinery and Intelligence, Alan Turing discussed the question of whether a computer could think. He had no doubt. And he put forward the idea of an ‘imitation game’: if a human and a machine were interrogated in such a way that the interrogator did not know which was the human and which the machine, and the interrogator could not distinguish between them by questioning, then it would be unreasonable not to call the computer intelligent.

There are competitions around the world where you chat with a partner via text messages and have to find out whether you are talking to a human or a machine. – By the way, we use CAPTCHA checks every day to prove that we are human on the Web, while teaching ‘the machine’ what characters we recognise in a picture of a text snippet. –

In his paper, Turing makes a claim similar to Stephen Hawking’s: machine intelligence “could take off once it reached a ‘critical mass’”.

Are we there yet? Not by a long shot.

Let’s go back 30 years.

I finished school in 1986 and went to the army for 15 months before I started studying computer science in 1987, with a focus on artificial intelligence. The German Research Centre for Artificial Intelligence (DFKI) was founded in the same year. DFKI is today the biggest research centre for AI in the world. The institute had barely been set up when the second ‘AI winter’ hit academia hard, with reduced funding and a generally reduced interest in AI. Not surprisingly so: expectations had been inflated and badly managed.

Two ‘AI winters’ – in analogy to nuclear winters – are widely recognised. The first period of reduced funding lasted from 1974 to 1980. UK research was hit especially hard. Professor Sir James Lighthill reported to the UK Parliament on the utter failure of AI to achieve its ‘grandiose objectives’. His conclusion: nothing being done in AI could not be done in other sciences. As a result, AI research was completely dismantled in England.

Similar funding cuts happened throughout the continent and in the United States.

The second AI winter lasted from 1987 to 1993. Again AI research could not deliver what was expected by industry, governments, and the public.

So, what was the world like thirty years ago, back in 1986?

The past: 1986

The Macintosh had stepped onto the world stage just two years earlier, with a graphical user interface and a mouse as input devices. The first ‘personal computer’, as Steve Jobs put it. MacPaint and MacWrite were the first applications bundled with the Macintosh. Microsoft Windows was just one year old, and Windows 2.0, the first usable version, was still a year in the future.

AI at that time?

Object-oriented programming in the form of a language called Smalltalk-80 was a big thing. Smalltalk was developed at Xerox PARC, the Palo Alto Research Centre; the same organisation that developed the graphical user interface paradigm and the mouse, and apparently did not know what to do with them commercially until Steve Jobs came for a visit. Today’s programming languages C++, Java, C#, and Swift, for example, are all based on those same principles, but they are still not as powerful as Smalltalk – from an AI perspective. Other important AI languages at that time were LISP, Scheme, and PROLOG.

The problems we addressed at that time were knowledge representation and planning problems.

Let’s look at two illustrative examples:

We describe the state of a world made of blocks and a table, the Blocks World. Let’s say, we have three blocks. Block 1 sits on the table. Block 2 sits on the table, and Block 3 sits on top of Block 2. The computer is then tasked to build a stack where Block 2 is on top of Block 1 and Block 3 sits on Block 2.

Three-year-olds do not have a problem with finding the following sequence of actions: Remove Block 3 from Block 2; put Block 2 on Block 1; put Block 3 on Block 2.

[Figure: the Blocks World]
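
To make this concrete, here is a minimal sketch of how such a problem can be written down, in today’s Python rather than the LISP or PROLOG of the time; the predicates and block names are my own illustration, not code from 1986:

    # States are sets of facts; an action is a guarded update of that set.
    initial = {("on", "B1", "table"), ("on", "B2", "table"),
               ("on", "B3", "B2"), ("clear", "B1"), ("clear", "B3")}
    goal = {("on", "B2", "B1"), ("on", "B3", "B2")}

    def move(state, block, source, destination):
        """Move `block` from `source` onto `destination`, if the rules allow it."""
        assert ("on", block, source) in state and ("clear", block) in state
        if destination != "table":
            assert ("clear", destination) in state
        new_state = set(state)
        new_state.discard(("on", block, source))
        new_state.add(("on", block, destination))
        if source != "table":
            new_state.add(("clear", source))
        if destination != "table":
            new_state.discard(("clear", destination))
        return new_state

    # The three-year-old's plan:
    state = move(initial, "B3", "B2", "table")
    state = move(state, "B2", "table", "B1")
    state = move(state, "B3", "table", "B2")
    assert goal <= state  # all goal facts now hold

Note how every action has to state explicitly which facts change and which are left untouched; that bookkeeping is exactly where the difficulties discussed below begin.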

Another example that could give you an idea of how difficult it was to deal with real-world problems is the mathematical puzzle Towers of Hanoi.

There are three rods and a number of disks of different sizes which can slide onto a rod. We begin with the disks in a neat stack: The biggest disk is at the bottom, the smallest disk is at the top.

[Figure: Towers of Hanoi]

The objective is to move the entire stack to another rod, following a few simple rules:

  1. Only one disk can be moved at a time.
  2. Each move consists of taking the topmost disk from one of the stacks and placing it on top of another stack, meaning: a disk can only be moved if it is at the top of a stack.
  3. No disk may be placed on top of a smaller disk.
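
The puzzle has an elegant recursive solution; here is a short sketch (my own illustration, not period code): move all but the largest disk onto the spare rod, move the largest disk, then move the rest back on top.

    def hanoi(disks, source, target, spare):
        """Move `disks` disks from `source` to `target`, obeying the three rules."""
        if disks == 0:
            return
        hanoi(disks - 1, source, spare, target)    # clear the way
        print(f"move disk {disks} from {source} to {target}")
        hanoi(disks - 1, spare, target, source)    # restack on top of the large disk

    hanoi(3, "A", "C", "B")  # prints the 2**3 - 1 = 7 moves

Finding this strategy is easy for a programmer; what was hard, as the next paragraphs explain, was telling a machine enough about rods, disks, and the world around them for it to find such a strategy on its own.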

What were the main difficulties?

How do you represent the state of the world? How much detail do you need to provide? What are the essential relationships that you need to model?

One facet of this representation problem is known as the frame problem: how do you describe the effects of an action without having to list everything that does not change?

And, in the case of the Blocks World: once we leave the theoretical description, in which only objects such as a table and blocks exist and only operations such as ‘grasp’, ‘un-grasp’, and ‘put’ are defined, questions pop up such as: How big is the table? Where exactly does the robotic arm put a block on that table? What happens if a block is dropped and falls off the table?

In 1986, AI was still in its infancy. All development was insular. Specialised hardware was available to speed up the execution of programs written in such AI languages as LISP. But the processing power was just not up to the task.

Representing knowledge was a difficult process as well. Re-use of represented knowledge was a huge problem. The major limiting factor for quicker development and exchange of algorithms and represented knowledge was the lack of standards.

The past: 2001

Fast forward 15 years to 2001.

No Space Odyssey.

No alien artefact.

No HAL 9000.

We have just ‘survived’ the ‘year 2000 problem’ and cured the ‘Millennium bug’.

The year 2000 marked an important first milestone for ubiquitous computing and the Internet of Things.

In 2000, the first phone marketed as a smartphone, the Ericsson R380 Smartphone, became available. A smartphone is a device combining features of a mobile phone with those of other mobile devices such as personal digital assistants. It has to be said, though, that the first smartphone, IBM’s Simon, had been developed as early as 1994. Simon had a touch screen (and a pen) and already ran such apps as Email, Fax, Address Book, Calculator, Calendar, Notepad, Sketchpad, and To Do.

In 2000, the first camera phones were sold. I got my first phone camera, the MCA-25, in 2002. It was plugged into my Sony Ericsson T300 as a separate accessory. Resolution: 300,000 pixels. A far cry from today’s resolution. The phone could store 147 pictures.

A lot of software solutions based on AI research had become successful. Due to the second AI winter, though, these solutions were called ‘smart’ instead. The term ‘artificial intelligence’ was to be avoided at all costs. I worked at a university spin-out company at the turn of the millennium while finishing my Ph.D. Whenever we went to Daimler, Audi, or Siemens, decision-makers did not want to hear a word about AI. AI equalled failure at that time.

“Many observers,” Ray Kurzweil wrote in 2005, “still think that the [last] AI winter was the end of the story and that nothing since has come of the AI field, yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry.”

For example, route planning and GPS navigation started to become widely available. In 2002, TomTom released its first route planning product, TomTom Navigator, on a handheld device.

The present

Since 2007, the iPhone has shaken up the mobile phone market, the PDA market, and the media player market. iOS and Android devices have helped extend the compact camera systems market — and the games market. Apple’s App Store and Google Play bring new business opportunities. Daily.

Social software emerged with a vengeance.

Facebook has been around for twelve years now. It has amassed 1.65 billion monthly active users [1]. Twitter, founded ten years ago, has 300 million monthly active users [2]. Google Plus, Instagram, Snapchat, etc., etc. All of them provide opportunities for data collection on an unprecedented scale.

Search engines, on the other hand, guide us to lots of information. Knowing our search history, Google helps us dig through the Internet.

Wolfram Alpha, a ‘computational knowledge engine’, uses its own knowledge base to answer specific questions, for example, about mathematics or people and history. It is a question answering system in contrast to Google’s Internet search.

Stephen Wolfram, by the way, is a British scientist.

Siri, Apple’s voice-controlled personal assistant, interprets spoken natural language for fact finding and searching. Among the search engines it draws on are Google and Wolfram Alpha. Microsoft calls its own version Cortana. And Google has Voice Search.

Programming frameworks such as Microsoft’s Bot Framework enable the development of competent chat bots in a short time and on a larger scale. You can ask them questions in natural language, via voice, Skype chats, or Facebook Messenger.

Mobile devices are more powerful than ever before: mobile phones, tablets, and wearable technology such as the Apple Watch, or activity trackers from Fitbit or Jawbone, are delivering services everywhere. You want to know about your sleep patterns and improve your rest? Get the appropriate device and app.

Those powerful devices are partnered with high-speed networks. Always on, always connected is already a reality for many of us, even though bandwidth is still an issue.

Artificial Intelligence today is called Artificial Intelligence again. The Internet of Things, the Web of sensors and connected devices, is already shaping up.

I have already mentioned chess-playing software and AlphaGo.

The future: 2031

Jumping forward fifteen years. We are in 2031.

It is safe to say that computational power will have increased immensely. Moore’s Law — which predicts that the number of transistors on a chip roughly doubles every two years — will see to it.
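
As a rough back-of-the-envelope illustration (an assumption for the sake of argument, not a forecast): fifteen years of doubling every two years amounts to roughly 180 times today’s transistor count.

    # Doubling every two years over fifteen years: 2 ** (15 / 2)
    growth_factor = 2 ** (15 / 2)
    print(f"roughly {growth_factor:.0f}x more transistors than today")  # ~181x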

Wireless and mobile networks will have merged. We will always be on. Our data will all live in the cloud, on servers and not on our devices.

Our interactions on social networks will intertwine with the real world even more. Personalisation and decision support will be part of our business and personal life.

Medical services, for example, will take into account actual data from us. And we will be better-informed patients thanks to health apps. Sensors will deliver information around the clock. Through the Internet of Things, ubiquitous computing will have become reality.

The future: 2046

Jumping forward another 15 years.

The famous Greek aphorism or maxim, ‘Gnothi seauton’, know thyself, is, in one interpretation, a warning to pay no attention to the opinion of the multitude. By 2046 the multitude may very well know more about you than you do. This offers an opportunity to learn about ourselves, though. What do the digital traces that we leave behind say about us? Our behaviour? Our ethics? Our moral standards?

The personal assistant we all crave might be there as a companion app. We might even all have a team of AIs working with us on our daily problems. Our health app might argue with our career app about a healthy balance of work and rest when discussing the next set of meetings. The health app will be supported by lots of health data: exercise, caloric intake and burn, the variety of our diet. The apps will learn from interacting with us, judging how we take advice and how to approach us differently when we make unwise decisions. I imagine them as versions of family and close friends, just based on facts rather than merely on good intentions and the latest story from magazines such as Health and Fitness or Yoga. Apps will interpret and act upon data acquired from the Internet of Things.

We will be able to discuss our decision making based on live facts and hard evidence. We will be able to ask for justifications and their relevance to the current situation, make predictions, and simulate outcomes. We will be able to easily include relevant experts in the decision making.

Wrapping up

Let’s come back to the initial questions: Where will artificial intelligence lead us in the coming years? Will we all be served by robots?

Elon Musk, the billionaire founder of SpaceX and Tesla, has compared developments in AI to ‘summoning demons’. Just like Stephen Hawking, he sees in those developments a threat and the possible end of the human race, with Moore’s Law being one of the main drivers of the recent progress.

Luciano Floridi, professor in philosophy and ethics of information at the University of Oxford, does not think so. “No AI version of Godzilla is about to enslave us, so we should stop worrying about science fiction and start focusing on the actual challenges that AI poses,” Professor Floridi says. “Moore’s law is a measure of computational power, not intelligence.”

“My vacuum-cleaning robot,” Floridi continues, “will clean the floor quickly and cheaply and increasingly well, but it will never book a holiday for itself with my credit card.” And, “Anxieties about super-intelligent machines are, therefore, scientifically unjustified.”

Artificial intelligence, like any other technology, will not lead us. We need to take the lead and decide what to make of this technology. There are plenty of opportunities and useful applications as I have pointed out.

Developing cognitive abilities within machines will have dramatic consequences, though. Moving from manual labour to machine-powered production set people free to get a better education, to create other types of businesses. But where do we go when – not if –  machine intelligence replaces accountants, administrators, middle management?

In the Science Fiction universe of Frank Herbert’s Dune a war with highly developed machines nearly wiped out mankind. The survivors made a drastic decision; they vowed to never develop machine intelligence again, turning to genetic engineering to ‘evolve’ humans.

It is us [here in the room] who have to make up our minds about the kind of society we want to live in. Maybe one in which Berta, my nosy, gossiping vacuum-cleaning robot, takes a holiday now and then.

References

[1] http://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/

[2] http://www.statista.com/statistics/282087/number-of-monthly-active-twitter-users/


Creativity is not a talent. It is a way of operating.

Creative people are not different from less creative people. Creative people just find ways to get into a certain mood, into a certain way of operating. They are playful and child-like. They play with ideas and explore them. Most of us know this. In the following video, of which I have summarised just some points, John Cleese talks about creativity. It is well worth watching.

At work we typically function in two modes: an open and a closed mode. In closed mode we act purposefully. We are less humorous, slightly anxious and impatient. We are focused and want to get things done. In open mode we act without a specific purpose. We are curious, playful, humorous. We should alternate between the two modes, as the open mode helps us solve problems and the closed mode helps us implement solutions.

John Cleese describes five factors that are required to be creative: Space, time, time (sic!), confidence, and humour.

  1. Space: We need a place for ourselves where we are undisturbed.
  2. Time: We need to give ourselves a time frame with a certain duration. About 90 minutes are best, as one needs about 30 minutes to settle down and let go of pressing issues. He discourages longer sessions as we normally need a break after 90 minutes anyway.
    Set boundaries of space and time for yourself! Separate yourself from regular life and create an “oasis”.
  3. Time: “Play” with a problem long enough. Give your mind enough time to come up with an original solution. Daydream about the problem at hand. Tolerate the discomfort of not having solved the problem yet. Always ask yourself: “When does the decision need to be taken?” Do not hasten the decision. But be decisive in the end.
  4. Confidence: To play means to experiment. Be open to anything that happens. Don’t be frightened to make errors. You can’t be spontaneous within reason.
  5. Humour: Humour gets us faster from the closed mode to the open mode. Don’t mix “seriousness” with “solemnity”.

Either technology or magic? I say, magic of technology is what we want, err, don’t we?


The recent death of the science fiction author Arthur C. Clarke reminded me of one of my favourite quotes:

“Any sufficiently advanced technology is indistinguishable from magic.” (Third of Clarke’s three “laws” of prediction.)

Modern information technology is for many people—and sometimes even for me—indistinguishable from magic. You cast a spell, i.e., interact in a specific way with some user interface, and if you wave your magical wand the right way—and only then—you achieve your goal. Such devices as the Nintendo Wii video game console or the iPhone with their motion detection capabilities allow for completely new interactions with the user and more natural interactions among users.

But there is a big problem with magic: You are required to believe in it and to not ask questions about it. Magic’s dark side is all about hiding, making believe, obscuring, and blinding. I do not say that software developers intentionally engineer obfuscating applications, but from a user’s point of view it often just looks the same.

Of course, there are times when I suspend my disbelief, when I need to suspend my disbelief. Every time I watch a movie or read a novel I am required to do so in order to be entertained. But life is (unfortunately?) not only entertainment. Computer systems need to have beautiful and elegant, easy-to-use interfaces that evoke a sense of wonder like magic does. This just helps keep your spirits up and makes working more fun. There is no need for dull and boring applications. But interfaces should never be shallow. Systems should help people learn about what is going on in a software system if people want to know. Systems should provide transparency when asked. Systems should explain the vocabulary they use when asked.

So, my dear developers, keep on providing magical interfaces, but let the interested users have a peek behind the curtains whenever they want.

Explanation opportunities at the book-seller’s

I just had a longer discussion with my wife about the information systems she uses at work. My wife has been working part-time as a book-seller for many years now and is often frustrated by the differing capabilities of the search and ordering tools of different distributors. The systems have different user interfaces and different search algorithms. The latter, especially, are the source of her frustration. Even though certain books are available from the distributors, the search engines produce different results. The book-seller has to know what he or she is searching for in order to find it. An idiocy par excellence.

If those search systems provided a means to inquire how the results were obtained (i.e., by providing action explanations), my wife and other users could either learn how the respective search works and use the search engines better, or complain about the search behaviour to the vendors. Either way, transparency provided by explanations would improve the overall performance.
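
To make the idea of an ‘action explanation’ a little more tangible, here is a minimal sketch in Python; it is my own illustration of the principle, not how any of the distributors’ systems actually work, and the matching rule is deliberately trivial:

    def search(catalogue, query):
        """Return matching titles plus a trace explaining how each decision was made."""
        matches, trace = [], []
        for title in catalogue:
            if query.lower() in title.lower():
                matches.append(title)
                trace.append(f"'{title}' matched: contains '{query}' (case-insensitive).")
            else:
                trace.append(f"'{title}' rejected: '{query}' not found as a substring.")
        return matches, trace

    books = ["The Dispossessed", "Dune", "Dune Messiah"]
    results, explanation = search(books, "dune")
    print(results)
    print("\n".join(explanation))

Even a trace as simple as this one would tell the user why a title did or did not show up, which is exactly the transparency that is missing today.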

Nintendo Wii meets Apple Keynote – Making a wish II


As Judi Smith pointed out, there already is a way to use the Wii mote with the Mac. Remote Buddy lets one use a wide range of remote controls, including the Wii mote (but, unfortunately, not my older Keyspan model 😦 ). But watching the respective presentation reveals that the Wii mote is only used as a standard remote control. An infrared source—for example a tea-light (sic!)—is necessary (for positioning, I assume). The motion sensors are not used in the way I described earlier. In fact, the motion sensors are not used at all. But the possibility is pointed out on the website.

Judi also points out that one needs to additionally use something like FlyGesture, but this still cannot provide the whole illusion I have in mind. There is no feedback between the execution of slide change effects and the (swiftness of the) gesture. This is why I “asked” Apple to do something about it. Apple would need to provide hooks to allow the user to control the speed of the animation. For example, if you want to tear off a page or turn a page very slowly in order to uncover some secret, maybe even give only a brief glimpse at the next page and then cover the page again … well.

[composed and posted with ecto]

Nintendo Wii meets Apple Keynote – Making a wish


Most of my friends and colleagues are hyped up about the Wii console. And it is the first time I can understand the hype about a game console. I might even get hooked, too, if I am not careful 😉 Being physically active in a game, and not just your mind and fingers, is intriguing.

Many years back I had the chance to play some very simple game in a 3D virtual reality environment. I do not remember too much about it. I stood on a platform wearing a heavy, uncomfortable headset. The headset contained two small (not so high resolution) monitors in front of my eyes and some motion sensors that allowed the system to change the scenery depending on where I turned my head. It was quite an experience. But due to the hardware costs, I imagine, nothing came of it and the game disappeared. The Wii console seems to be a very good evolution of current game consoles, going in the direction of games where you are totally immersed.

Well, thinking about the Wii controllers with motion-sensing technology and some material I am currently reading about presentation skills (see, for example, the two inspirational blogs Presentation Zen and Les Posen’s CyberPsych Blog), I wondered how much more interesting one could make a presentation if one had a remote control like a Wii controller for Keynote presentations. A presentation is always a kind of performance, isn’t it? Some presenters are better “actors” (more outgoing) and others are more on the quiet side. For those of us who like to make the whole stage our own, and not only the square metre behind the lectern, such a presentation tool could enhance the presentation experience and turn it into a presentation adventure.

Think about the turning cube effect of Apple Keynote (part of iWork). Now think about wanting to present some heavy material to your boss: you need all your physical power to turn the cube from left to right or, maybe even better, to lift the cube and turn it upwards. With a motion-sensing presentation tool you would be able to build a certain resistance into slide change effects. During the presentation you would then need to grab an edge of your slide and “manually” move the slide. (It must be funny if you set up slide effects for yourself and then have someone else, who is not as strong as you, do the presentation.) You could even do part of your workout during a presentation! Who would have thought.

I think you get the picture. So: Apple, do something about it! It would be fun.

And, Apple, while you are at it: think about Les Posen’s suggestion to use iPhoto for storing all of one’s Keynote slides.

[composed and posted with ecto]

Travelling without moving


Reading David Weinberger’s book “Small pieces loosely joined {a unified theory of the web}” reminded me of the famous line from Frank Herbert’s “Dune” about travelling without moving. In one chapter David Weinberger describes our perception of space and how getting documents in the “real” world differs from going to documents on the Web. The Web folds space in such a way that (most of) human knowledge is within arm’s reach. Frank Herbert died before the Web came to pass. What would he think about his metaphor now?

[composed and posted with ecto]

Design vs. Art


For quite some time now I have been an avid reader of Maeda’s SIMPLICITY blog, a constant source of inspiration. But it took me nearly as long to buy his small book on “The Laws of Simplicity”. In many ways the book does not contain anything new to me (as I had been warned). Most of its content I have already learned over time. But—you saw this ‘but’ coming, didn’t you?—it is necessary to be reminded of those things from time to time and to take the time to reflect on those experiences and lessons learned. What strikes me most is the concentrated and fresh view, interwoven with personal beliefs and insights, which in the end makes the book so accessible and easy to relate to. It was definitely a worthwhile read!

Over the last year my private life and my research life—btw, for a scientist: can there be a difference between private and research life?—have gravitated towards art and design (see, for example, my current project proposal Mnemosyne). So John Maeda’s differentiation of art from design struck a chord with me, helping me a great deal in grasping the concepts:

“The best art makes your head spin with questions. Perhaps this is the fundamental distinction between pure art and pure design. While great art makes you wonder, great design makes things clear.” (“The Laws of Simplicity”, p. 70)

As a researcher, your head spins with questions most of the time. Viewing part of one’s research as art and some of the resulting systems as works of art could serve as a way of channelling those questions. From those pieces of art one can then work towards design, towards making things clear. This viewpoint allows for more personal freedom in approaching complicated or overwhelming research questions. Look at the problem from a (probably naïve) artistic and fun point of view. Play with the research questions! Use the right, synthesis-oriented half of your brain instead of the left, more analytic half. The ten “laws” then help channel one’s efforts.

These are, by no means, breathtakingly new insights. Research work is always about asking questions and coming up with reproducible results and valid evaluations using the right tools and approved methods. But looking at research from an art/design viewpoint makes it a tad more interesting and a bit more fun, at least for me 😉

[composed and posted with ecto]

Document-oriented vs. people-oriented access


In my lecture on the Semantic Web / Web 2.0, which started yesterday, I formulated for the first time how my habits of searching the Web have been changing since Social Software became available. My search habits are shifting from googling with keywords and crawling through long result lists to navigating through del.icio.us accounts. From time to time I go through my shared bookmarks and check out what others have commented on those bookmarks, thus finding experts on certain topics and valuable web-sites; web-sites I would not have found via Google, I am pretty sure.

Of course, this is the intention of such services as del.icio.us, bibsonomy, and diigo, but I think it hints at a more subtle change of the Web. The Web is adapting more to human behaviour. Whenever possible I prefer asking colleagues and friends about topics and valuable documents to searching for myself. Social Software is starting to offer such “natural” access to documents on the Web.

[composed and posted with ecto]

Scenario: A fiction author looks for mythological information


I am currently working on a project proposal in which, among others, the following scenario will be addressed:

Researching information is a vital task for fiction authors. Say an author is interested in writing a novel that deals with some mythological topics. As one would guess, Google delivers tons of information; information one would need to sift through carefully. Say he would like to restrict his search to museums, as he wants to base the story on artifacts from throughout the last 20 to 25 centuries, i.e., beginning with Egyptian history. Let us assume museums provide tags for their artifacts (and let us forget for the moment how to acquire those tags). Then a Semantic Web search and browse would deliver all kinds of artifacts classified, e.g., as paintings or sculptures. The author would get information on their creation dates, the artists, probably who knew whom (e.g., by way of friend-of-a-friend links). Additional information could point him to biographies on Wikipedia or more specialised sources, point out and explain symbols used by an artist or at a certain time, etc.
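
To give a flavour of what such tags could look like technically, here is a minimal sketch using RDF triples and a SPARQL query via the rdflib Python library; the museum namespace, the artifact names, and the ‘theme’ property are all invented for this illustration:

    from rdflib import Graph, Literal, Namespace

    # Hypothetical namespaces and resource names, made up for this sketch.
    EX = Namespace("http://example.org/museum/")
    FOAF = Namespace("http://xmlns.com/foaf/0.1/")

    g = Graph()
    g.add((EX.sculpture42, EX.theme, Literal("Egyptian mythology")))
    g.add((EX.sculpture42, EX.exhibitedAt, EX.BritishMuseum))
    g.add((EX.sculpture42, EX.creator, EX.artistA))
    g.add((EX.artistA, FOAF.knows, EX.artistB))  # a friend-of-a-friend link

    # Find mythology-related artifacts and where they are on display.
    results = g.query("""
        PREFIX ex: <http://example.org/museum/>
        SELECT ?artifact ?museum WHERE {
            ?artifact ex:theme ?theme ;
                      ex:exhibitedAt ?museum .
            FILTER(CONTAINS(LCASE(STR(?theme)), "mythology"))
        }
    """)
    for artifact, museum in results:
        print(artifact, museum)

A real museum would, of course, publish much richer metadata, but the principle of querying linked descriptions instead of sifting through pages of search results stays the same.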

Let us assume the story is set to take place in London and he will go there for location scouting. He now needs a way of combining his digital search results with his trip to London. Several itineraries could be created from his search results; each itinerary describing a way through a museum or gallery, e.g., Tate Modern Art Gallery or The British Museum, linking exhibits together in different ways, e.g., by dates, owners, a certain story, artists or relationships between artists.

Location scouting and research work are always accompanied by taking notes and pictures. Tagging could be simplified by computer support. If a mobile device knows its GPS co-ordinates or network location, routine information could be added very easily for later processing. (Another way of identifying artifacts is Semacodes, if provided by a museum.) Recording the places where the author has been could feed into future research activities. Having been in London before would have an impact on the selection of places to visit, wouldn’t it? Either there are places that were interesting enough to gather more details from, or they have already been tagged as unimportant.

Btw, a while ago I briefly described some other semantic support fiction authors could find interesting (see earlier post).
