Tuesday, November 9, 2010

Future Enterprise- The Intelligent Enterprise

The enterprise of the future will increasingly depend on a wide range of rigorous artificial intelligence systems, algorithms and techniques to facilitate its operation at all levels of management.

As described in The Adaptable Enterprise blog, major decisions incorporating sophisticated levels of intelligent problem-solving will increasingly be applied autonomously and within real time constraints to achieve the level of adaptability required to survive in an ever changing and uncertain global environment. This trendline describes these techniques and their application.

A number of artificial intelligence techniques and algorithms are rapidly reaching maturity and will be essential components of the Intelligent Enterprise Architecture of the future, including:

Genetic algorithms- solution discovery and optimisation modelled on the genetic operators of crossover, replication and mutation to explore generations of parameterised options (see the sketch after this list).

Bayesian networks- graphical models representing multivariate probability networks; providing inference and learning based on cumulative evidence.

Fuzzy Logic- non-binary methods of decision-making -allowing information inputs to be weighted and an activation threshold established.

Swarm Intelligence- combining multiple components to achieve group intelligent behaviour.

Neural networks- pattern discrimination techniques modelled on neuron connection.

Expert Systems- rule based inference techniques targeted at specific problem areas.

Intelligent Agents- this form of AI is particularly relevant to the future enterprise architecture because it is designed to be adaptive to the web's dynamic environment; that is, an agent is designed to learn by experience. Agents can also act collaboratively in societies, groups or swarms. Through swarming behaviour agents can achieve higher levels of intelligence, capable of making increasingly complex decisions autonomously.
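To make the first of these techniques concrete, the following is a minimal genetic-algorithm sketch in Python. The objective function, population size and mutation settings are hypothetical placeholders; a real enterprise application would substitute its own parameterised model and fitness measure.

import random

POP_SIZE, GENES, GENERATIONS = 30, 8, 50

def fitness(individual):
    # Hypothetical objective: prefer parameter vectors whose values sum close to a target.
    return -abs(sum(individual) - 4.0)

def crossover(a, b):
    # Single-point crossover exchanges parameter segments between two parents.
    point = random.randint(1, GENES - 1)
    return a[:point] + b[point:]

def mutate(individual, rate=0.1):
    # Each gene is perturbed with a small probability.
    return [g + random.gauss(0, 0.2) if random.random() < rate else g for g in individual]

population = [[random.uniform(0, 1) for _ in range(GENES)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]   # selection: replicate the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best parameters found:", max(population, key=fitness))

The same loop of selection, crossover and mutation underlies far more elaborate industrial optimisers.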

The above techniques will continue to be enhanced and packaged in different combinations to provide immensely powerful problem solving capability over time. The technology is slowly being applied discretely within business intelligence, data mining and planning functions of enterprise systems.

However AI is yet to realise its full potential within the enterprise model by being applied to decision-making in a targeted autonomous fashion. When this happens over the next decade, the quality of decision-making is likely to improve significantly, with a concomitant reduction in operational and management risk.

Monday, November 1, 2010

Future Enterprise- Cyber-Infrastructure for World 2.0

Our future World 2.0 will face enormous challenges from now into the foreseeable future, including global warming, globalisation and social and business hyper-change.

Global Warming will create shortages of food and water and loss of critical ecosystems and species. It will require massive prioritisation and re-allocation of resources on a global scale.

Globalisation will require humans to live and work together cooperatively as one species on one planet- essential for our survival and finally eliminating the enormous destruction and loss of life that wars and conflict inevitably bring.

Social and business change will present myriad challenges relating to building and maintaining a cohesive social fabric that provides democracy and justice, adequate levels of health and education, and solutions to urban expansion, crime prevention, transport congestion and food and water security, in a fast-changing global environment. This will require adaptation on a vast scale.

It is apparent that in order to meet these challenges, humans must harness the enormous advances in computing and communications technologies to achieve a complete makeover of the world’s Cyber-Infrastructure.

The infrastructure of the new cyber reality now affects every aspect of our civilisation. In tomorrow’s globalised world a dense mesh of super-networks will be required to service society’s needs- the ability to conduct government, business, education, health, research and development at the highest quality standard.

This infrastructure will be conjoined with the intelligent Internet/web, but will require additional innovation to facilitate its operation; a transparent and adaptable heterogeneous network of networks, interoperable at all levels of society.

In the last two decades tremendous progress has been made in the application of high-performance and distributed computer systems including complex software to manage and apply super-clusters, large scale grids, computational clouds and sensor-driven self-organising mobile systems. This will continue unabated, making the goal of providing ubiquitous and efficient computing on a worldwide scale possible.

But there’s a long road ahead. It is still difficult to combine multiple disparate systems to perform a single distributed application. Each cluster, grid and cloud provides its own set of access protocols, programming interfaces, security mechanisms and middleware to facilitate access to its resources. Attempting to combine multiple homogeneous software and hardware configurations in a seamless heterogeneous distributed system is still largely beyond our capability.

At the same time, tomorrow’s World 2.0 enabling infrastructure must also be designed to cope with sustainability and security issues.
It is estimated that the ICT industry contributes 2-3% of total greenhouse gas emissions, growing at 6% per year compounded. If this trend continues, total emissions could triple by 2020. The next generation cyber-architecture therefore needs to be more power-adaptive. Coupled with machine learning, this could achieve savings of up to 70% of total ICT greenhouse emissions by 2020.

But the world is also grappling with the possibility of cyber-warfare as well as increasingly sophisticated criminal hacking, with an estimated 100 foreign intelligence organisations trying to break into US networks. A global protocol safeguarding cyber privacy rights between nations, combined with greater predictive warning of rogue attacks, is critically needed. The next generation of cyber-infrastructure will therefore have to incorporate autonomous intelligence and resilience in the face of both these challenges.

To meet these targets a lot will ride on future advances in the field of Self-Aware Networks- SANs. Previous blogs have emphasised the emergence of the networked enterprise as the next stage in advanced decision-making. SANs are a key evolutionary step on the path to this goal. Self-aware networks can be wired, wireless or peer-to-peer, allowing individual nodes to discover the presence of other nodes and links as required- largely autonomously. Packets of information can be forwarded to any node without traditional network routing tables, based on reinforcement learning and smart routing algorithms, resulting in reduced response times, traffic densities, noise and energy consumption.
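As a rough illustration of the kind of routing a self-aware network might use, here is a minimal Q-routing sketch in Python. The four-node topology, simulated delays and learning rate are all hypothetical: each node keeps its own estimate of the delay to a destination via each neighbour and refines it from feedback, so packets drift towards faster paths without a global routing table.

import random

links = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}   # toy topology
Q = {n: {m: 10.0 for m in links[n]} for n in links}   # Q[node][neighbour] = estimated delay to "D"
ALPHA = 0.5   # learning rate

def route(src, dest="D", max_hops=20):
    node = src
    while node != dest and max_hops > 0:
        nxt = min(Q[node], key=Q[node].get)                          # forward via the best-looking neighbour
        delay = random.uniform(0.5, 1.5)                             # simulated per-hop latency
        remaining = 0.0 if nxt == dest else min(Q[nxt].values())     # neighbour's best estimate onward
        Q[node][nxt] += ALPHA * (delay + remaining - Q[node][nxt])   # reinforcement-learning update
        node, max_hops = nxt, max_hops - 1

for _ in range(200):                                                 # learn from repeated traffic
    route(random.choice(["A", "B", "C"]))
print(Q)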

Another major shift towards a networked world has been the rise of Social Networks. These have attracted billions of users for networking applications such as Facebook, LinkedIn, Twitter etc. These are providing the early social glue for World 2.0, offering pervasive connectivity by processing and sharing multi-media content. Together with smart portable devices, they cater to the user’s every desire, through hundreds of thousands of web applications covering all aspects of social experience– entertainment, lifestyle, finance, health, news, reference and utility management etc.

With increased user mobility, location sharing and a desire to always be connected, there is a growing trend towards personalized networks where body, home, urban and vehicle sensory inputs will be linked in densely connected meshes to intermediate specialised networks supporting healthcare, shopping, banking etc.

The explosion of social networked communities is triggering new interest in collaborative systems in general. Recent research in network science has made a significant contribution to a more profound understanding of collaborative behaviour in business ecosystems. As discussed in previous posts, networked ‘swarm’ behaviour can demonstrate an increase in collective intelligence. Such collective synergy in complex self-organising systems allows ‘smarter’ problem solving as well as greater decision agility. By linking together in strategic and operational networks, enterprises can therefore achieve superior performance than was previously possible.

The key characteristics of the smart business network of the future will be its ability to react rapidly to emerging opportunities or threats, by selecting and linking appropriate business processes. Such networks will be capable of quickly and opportunistically connecting and disconnecting relationship nodes, establishing business rules for participating members on the basis of risk and reward.
This ‘on the fly’ capacity to reconfigure operational rules, will be a crucial dynamic governing the success of tomorrow’s enterprise. CIOs must also learn to span the architectural boundaries between their own networked organisation and the increasingly complex social and economic networked ecosystems in which their organisations are embedded.

In fact the business community is now struggling to keep up with the continuous rate of innovation demanded by its users. Social network solutions have the potential to help meet this demand by shaping the design of future architectures to provide better ways to secure distributed systems.

So what is the future of this new collaborative, densely configured networked world? What we are witnessing is the inter-weaving of a vast number of evolving and increasingly autonomous networks, binding our civilisation in a web of computational nodes and relational connections, spanning personal to global interactions.

By 2050 the new World 2.0 cyber-infrastructure will link most individuals, enterprises and communities on the planet. Each will have a role to play in our networked future, as the cells of our brain do- but it will be a future in which the sum of the connected whole will also be an active player.

Friday, June 25, 2010

Future Enterprise- The Greening System

The net energy impact of an enterprise’s products and services on the community far outweighs the benefits of any savings in its computer processing operations.

Saving energy in the 21st century’s computing ecosystem is a vital component in achieving the goal of a sustainable society and is currently being addressed within the context of numerous emerging technologies including- flexible cloud processing, low-energy mobile and sensor communications, outsourcing of services, infrastructure virtualisation, application integration, embedded electronics and low energy processor design.

But of far more significance is the potential role of information and computing technology in reducing carbon emissions in most of today’s service processes- whether relating to power generation, manufacturing, transport, service delivery etc.

This revolution, using the computer as the most effective green machine ever designed, is rapidly taking shape with the emergence of the ‘smarter planet’ mantra. This has already been adopted by every major systems and software provider including- IBM, Cisco, Google, SAP, Apple, Intel, Microsoft and Oracle and promises the optimisation of the planet’s infrastructure.
This will presage more efficient healthcare, education, communication, utility and government services, as well as higher quality industry outcomes in construction, mining, travel, engineering, agriculture etc, by applying the latest advances in artificial intelligence, design, materials, electronics, computing and control sciences.

As well as the enormous energy reduction payoffs of smarter infrastructure, the ‘smarter planet’ will manifest in a limitless number of areas including-

Simulation-based Engineering- solving previously intractable design problems and achieving significant cost and energy reductions by applying computer simulated models and prototypes for testing purposes:

Transportation Systems- managing major traffic flows and supply chains, which will demand increasingly complex integration and scheduling via multi-modal transport networks:

Developing Nations Environments - allowing the populations of these countries to join the developed networked knowledge world and gain leverage through the application of cheap sensors and low cost intelligent mobile devices to help solve complex environmental and resource allocation problems.

Such global energy reduction potential, gained by using the computer to generate overall outcome savings, is indisputable and in fact totally dwarfs the benefits gained from optimising computer processing as an end in itself.

But greater sustainability benefits are also conditional on the performance and effectiveness of computer processing, with real-time, event-driven applications becoming increasingly common. Computer processing energy gains must therefore evolve within the constraints of process performance needs. Higher performance processing may be more energy intensive, but still deliver far greater benefits in terms of outcome energy savings; so that deriving an optimum trade-off between energy input efficiency and performance output efficiency will be critical.
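One illustrative way to frame this trade-off (a simple formulation offered as a sketch, not a figure from the literature) is to choose the processing performance level p that maximises the net energy benefit:

\[
\max_{p}\; B(p) = S_{\text{outcome}}(p) - E_{\text{processing}}(p)
\]

where S_outcome(p) is the community-wide energy saved by the services that performance level enables and E_processing(p) is the energy consumed to deliver it; the optimum lies where an extra unit of processing energy no longer yields at least a unit of outcome savings.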

But an even more significant energy paradigm is emerging, which encompasses the capacity of the enterprise to deliver the sustainable benefits of its services to the wider community.

In the final analysis it is the enterprise that is the primary implementer of services to its customers- whether individuals or businesses. These are the beneficiaries or otherwise of its products and services.
A General Motors that keeps churning out gas-guzzling vehicles, totally unsuited to a greener environment and its customers’ needs, may do major harm to the planet no matter how efficient or sophisticated its computerised operational systems are.

What this boils down to is the role of the future enterprise as the most relevant greening system in relation to the communities it services. It is the enterprise- small, large, public or private, which is the key enabling system to achieving a greener world.

Tomorrow’s enterprise will be the primary harnesser of human mind power, amplified by expanding computational intelligence in our world. Its potential therefore to create a greener future through its impact on the wellbeing of the wider community is what ultimately should be assessed as its true value to society.

Friday, April 30, 2010

Future Enterprise- Future Brain Architecture

Is today’s enterprise, including its IT acolytes, missing something very obvious and vitally important in its current management mindset or is it just an inability by a traditionally conservative constituency, to accept the radical paradigm shift involved?

Enterprise IT is beginning to dip its toe in the water and borrow some of its inspiration from biological models. For example, a number of the most valuable AI techniques routinely applied in business- genetic algorithms, neural networks, DNA and swarm computation- are biologically based, as is the concept of the organisation as a complex ecosystem, rather than a rigid hierarchical structure largely disconnected from its environment.

Networks are also getting a look-in. Complex decision-making, using elements of autonomous, self-organising and intelligent networks, incorporating complex feedback loops to monitor operational performance and enhance relationships with customers and suppliers, are now being trialled.

But the current enterprise management model is still missing the big picture- the shift towards an efficient, self-regulating, self-organising, self-evolving framework, so critical for survival in a future fast-moving, uncertain physical and social environment.
The most efficient blueprint for such an architecture and one honed over billions of years and governing all animal life, is the living brain; in particular the advanced human brain.

For the last thirty years, since the advent of computerised imaging techniques, scientists have been trying to prise open the secrets of the brain’s incredible power and flexibility. Not just how it computes so efficiently, but its ability to adapt, evolve and manage its 100 billion neurons and dozens of specialised structures, as well as all the relationships of the body’s incredibly rich cellular processes, organs and bio-systems. It has also mastered the capacity to flexibly adapt to a vast number of environmental challenges- both physical and social, while at the same time continuing to evolve and grow its intelligence at the individual, group and species level.

If only it was possible to harness this most complex object in the universe, to manage our own still-primitive, nascent organisational structures.

So what’s the secret to the brain’s incredible success in guiding the human race through its evolutionary odyssey? Well finally the creativity and perseverance of countless dedicated scientists is starting to pay dividends, with two recent major conceptual breakthroughs-
A Unified Theory of the Brain and the key to the Sub-conscious Brain.

Current theories of the mind and brain have primarily focussed on defining the mental behaviour of others using the brain’s mirror neurons. These are a set of specialized cells that fire when an animal observes an action performed by another. Therefore, the neurons ‘mirror’ or reflect the behaviour of the other, as though the observer was itself acting. Such neurons have been directly observed in primates and more recently humans and are believed to exist in other species, such as birds.

However, despite an increasing understanding of the role of such mechanisms in shaping the evolution of the brain, current theories have failed to provide an overarching or unified framework linking all mental and physical processes- until recently. A group of researchers at University College London, headed by neuroscientist Karl Friston, has now derived a mathematical framework that provides a credible basis for such a holistic theory.

This is based on Bayesian probability theory, which allows predictions to be made about the validity of a proposition or phenomenon based on the evidence available. Friston’s hypothesis builds on an existing theory known as the “Bayesian Brain”, which postulates the brain as a probability machine that constantly updates its predictions about its environment based on its perception, memory and computational capacity. In other words it is constantly learning about its place in the world by filtering input knowledge through a statistical assessment process.

The crucial element in play, is that these encoded probabilities are based on cumulative experience or evidence, which is updated whenever additional relevant data becomes available; such as visual information about an object’s location or behaviour. Friston’s theory is therefore based on the brain as an inferential agent, continuously refining and optimising its model of the past, present and future.
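A minimal sketch of this kind of updating, in Python (the locations, prior and likelihoods are invented purely for illustration): a prior belief about an object's position is revised as the same noisy cue arrives repeatedly, and the probability mass shifts towards the hypothesis the evidence supports.

locations = ["left", "centre", "right"]
belief = {loc: 1 / 3 for loc in locations}               # uninformed prior
likelihood = {"left": 0.1, "centre": 0.3, "right": 0.8}  # P(cue | object at location), hypothetical

def update(belief, likelihood):
    # Bayes' rule: posterior is proportional to prior times likelihood, then renormalised.
    unnormalised = {loc: belief[loc] * likelihood[loc] for loc in belief}
    total = sum(unnormalised.values())
    return {loc: p / total for loc, p in unnormalised.items()}

for _ in range(3):                                       # three successive cues
    belief = update(belief, likelihood)
    print(belief)                                        # probability mass shifts towards "right"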

This can be seen as a generic process applied to all functions and protocols embedded in the brain; continually adapting the internal state of its myriad neural connections, as it learns from its experience. In the process it attempts to minimise the gap between its predictions and the actual state of the external environment on which its survival depends.

Minimising this gap or prediction error is crucial and can be measured in terms of the concept of ‘free energy’ used in thermodynamics and statistical mechanics. This is defined as the amount of useful work that can be extracted from a system such as an engine and is roughly equivalent to the difference between the total energy provided by the system and its waste energy or entropy. In this case the prediction error is equated to the free energy of the system, which must be minimised as far as practical if the organism is to continue to develop.
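In standard variational notation (stated here as general background rather than quoted from Friston's papers), the free energy F of the brain's approximate beliefs q(s) about hidden states s, given observations o, can be written:

\[
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o,s)\right]
  \;=\; D_{\mathrm{KL}}\!\left(q(s)\,\|\,p(s\mid o)\right) - \ln p(o) \;\ge\; -\ln p(o)
\]

Because the KL divergence is never negative, F is an upper bound on the 'surprise' -ln p(o); driving free energy down therefore both sharpens the internal model's approximation of the true posterior and reduces prediction error, which is the link to thermodynamic free energy drawn on above.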

All functions of the brain have therefore evolved to reduce predictive errors to enhance the learning process. When the predictions are right, the brain is rewarded by being able to respond more efficiently and effectively, using less energy. If it is wrong, additional energy is required to find out why and formulate a better set of predictions.

The second breakthrough has come from a better understanding, again through neuro-imaging, of the brain’s subconscious processes. It’s been revealed that the brain is incredibly active, even when a person is not purposely thinking or acting, for example when daydreaming or asleep. It is in fact keeping subliminal watch, communicating, synchronising and prepping its networks for a conscious future action or response; continuously organising and refining its neural systems such as the cortex and memory; in the process using up to twenty times as much energy as the conscious mode of operation requires. This mechanism is called the brain’s default mode network or DMN and has only been recently recognised as a cogent system in its own right.

Now fast forward to the future enterprise, running under an architecture that incorporates these two knowledge breakthroughs. What are the additional benefits over the old model? Not too difficult to deduce.

Any organisation that is capable of constantly and seamlessly monitoring itself in relation to its internal functions and external environment; assessing its performance against its predictions and requirements in real-time through efficient feedback mechanisms; being aware of changes in its environment and opportunities to improve its performance and productivity; self-optimising its functions and goals; self-correcting its actions, searching autonomously for the best solutions for performing complex decision-making and constantly building on its experience and intelligence – must mark a vast improvement over the current model.

Not only that- this model has been tested and operationally proven in the cauldron of evolution over billions of years. Not a bad benchmark!
Too difficult to introduce into mainstream enterprise operations? I don’t think so, not in an era when we can build the world wide web, space stations and large particle colliders, model galaxies and the multiverse, apply genetic engineering techniques to treat disease, grow new organs from stem cells and plan to put humans on Mars!

Monday, April 12, 2010

Future Enterprise- Rebirthing Hal

The arrival of super-smart evolutionary computers, capable of autonomous reasoning, learning and emulating the human-like behaviour of the mythical HAL in Arthur C. Clarke's 2001: A Space Odyssey, is imminent.

The Darwinian evolutionary paradigm has finally come of age in the era of super -computing. The AI evolutionary algorithm which now guides many problem solving and optimisation processes, is also being applied to the design of increasingly sophisticated computing systems. In a real sense, the evolutionary paradigm is guiding the design of evolutionary computing, which in turn will lead to the development of more powerful evolutionary algorithms. This process will inevitably lead to the generation of hyper-smart computing systems and therefore advanced knowledge; with each evolutionary computing advance catalysing the next in a fractal process.

Evolutionary design principles have been applied in all branches of science and technology for over a decade, including the development of advanced electronic hardware and software, now incorporated in personal computing devices and robotic controllers.
One of the first applications to use a standard genetic algorithm was the design of an electronic circuit which could discriminate between two tone signals or voices in a crowded room. This was achieved by using a Field Programmable Gate Array or FPGA chip, on which a matrix of transistors or logic cells was reprogrammed on the fly in real time. Each new design configuration was varied or mutated and could then be immediately tested for its ability to achieve the desired output- discriminating between the two signal frequencies.

Such evolutionary-based technologies provide the potential to not only optimise the design of computers, but facilitate the evolution of self-organisational learning and replicating systems that design themselves. Eventually it will be possible to evolve truly intelligent machines that can learn on their own, without relying on pre-coded human expertise or knowledge.

In the late forties, John von Neumann conceptualised a self-replicating computer using a cellular automaton architecture of identical computing devices arranged in a chequerboard pattern, changing their states based on their nearest neighbour. One of the earliest examples was the Firefly machine with 54 cells controlled by circuits which evolved to flash on and off in unison.

The evolvable hardware that researchers created in the late 90’s and early this century was proof of principle of the potential ahead. For example, a group of Swiss researchers extended Von Neumann's dream by creating a self-repairing, self-duplicating version of a specialised computer. In this model, each processor cell or biomodule was programmed with an artificial chromosome, encapsulating all the information needed to function together as one computer and capable of exchanging information with other cells. As with each biological cell, only certain simulated genes were switched on to differentiate its function within the body.

A stunning example of the application of Darwinian principles to the mimicking of life was the development of the CAM (Cellular Automata Machine) Brain in 2000. It contained 40 million neurons, running on 72 linked FPGAs comprising 450 million autonomous cells. In the same year the first hyper-computer- HAL-4rw1 from Star Bridge Systems- reached commercial production. Based on FPGA technology, it was claimed to operate at four times the speed of the world's fastest supercomputer.
And at the same time NASA began to create a new generation of small intelligent robots called ‘biomorphic’ explorers, designed to react to the environment in similar ways to living creatures on earth.

Another biological approach applied to achieve intelligent computing was the neural network model. Such networks simulate the firing patterns of neural cells in the brain, which accumulate incoming signals until a discharge threshold is reached, allowing information to be transmitted to the next layer of connected cells. However, such digital models cannot accurately capture the subtle firing patterns of real-life cells, which contain elements of both periodic and chaotic timing. The latest simulations therefore use analogue neuron circuits to capture the information encoded in these time-sensitive patterns and mimic real-life behaviour more accurately.
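The threshold-and-fire behaviour described above can be sketched in a few lines of Python as a leaky integrate-and-fire neuron (the leak factor, threshold and input stream below are arbitrary illustrative values, not a model of any specific circuit):

import random

THRESHOLD, LEAK, RESET = 1.0, 0.9, 0.0
potential, spike_times = 0.0, []

for t in range(100):                              # discrete time steps
    incoming = random.choice([0.0, 0.0, 0.3])     # sparse, randomly timed input signals
    potential = potential * LEAK + incoming       # membrane potential leaks, then integrates input
    if potential >= THRESHOLD:
        spike_times.append(t)                     # threshold crossed: the neuron fires...
        potential = RESET                         # ...and resets

print("spikes at steps:", spike_times)

The timing of the output spikes depends on when the inputs arrive, not just on their total, which is the property the analogue circuits mentioned above aim to preserve.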
Neural networks and other forms of biological artificial intelligence are now being combined with evolutionary models, taking a major step towards the goal of artificial cognitive processing; allowing intelligent computing systems to learn on their own and become experts in any chosen field.

Eventually it will be possible to use evolutionary algorithms to design artificial brains, augmenting or supplanting biological human cognition. This is a win-win for humans. While the biological brain, with its tens of billions of neurons each connected to thousands of others, has assisted science to develop useful computational models, a deeper understanding of computation and artificial intelligence is also providing neuroscientists and philosophers with greater insights into the nature of the brain and its cognitive processes.

The future implications of the evolutionary design paradigm are therefore enormous. Universal computer prototypes capable of continuous learning are now reaching commercial production. Descendants of these systems will continue to evolve, simulating biological evolution through genetic mutation and optimisation, powered by quantum computing. They will soon create capabilities similar to those of HAL in Arthur C. Clarke's "2001: A Space Odyssey"- and only a few decades later than predicted.

However the reincarnation of the legendary HAL may in fact be realised by a much more powerful phenomenon incorporating all current computing and AI advances- the Intelligent World Wide Web. As previously discussed, this multidimensional network of networks, empowered by human and artificial intelligence and utilising unlimited computing and communication power, is well on the way to becoming a self-aware entity and the ultimate decision partner in our world.

Perhaps HAL is already alive and well.

Thursday, February 25, 2010

Future Enterprise- Model-Based Development

Software and system development needs to seriously grow up- and fast. It urgently needs to become far more rigorous and dependable if it’s to have any chance of meeting critical 21st century process engineering requirements. Model-Based Development- MBD might be the answer.

Two factors are conspiring to force software development from adolescence to maturity.

Firstly, the rapidly increasing complexity of modern computer systems, applied more frequently in life critical contexts.
Secondly, the relentless pace of change driving process and system obsolescence.

The increasing complexity of modern computer software threatens to place an upper limit on our capacity to improve and optimise the primary processes governing our civilisation. Modern society is built around the delivery of precise real-time processes and services, which must increasingly meet critical benchmarks of efficiency, integrity, transparency and adaptability.

Even generic applications such as operating software, office management and resource planning systems require hundreds of software engineers to develop and maintain them. But that degree of complexity ramps up exponentially for larger automated systems covering the range of enterprise, government and scientific applications- supply chains, production and process control, social and media software, communications, space, energy, engineering, transport and disaster management services.

Complex software systems also need to constantly evolve to meet the latest shift in business and environmental pressures and practice. As a result, errors and poor quality performance built into early versions can quickly compound, with the system ending up in gridlock and malfunctioning.

Even worse, the problem is escalating as computer scientists and engineers push the boundaries of the possible; seeking to integrate diverse applications across multiple platforms, while at the same time implementing advanced solutions incorporating augmented reality, intelligent agents and location-based awareness. The problems of complexity and change can only get worse as solutions are required relating to the next generation of super systems managing global warming impacts, smart AI and sensor-embedded infrastructure and ecosystem evolution.

But there’s light at the end of the tunnel and it revolves around implementing Model-Based Development- MBD methods, incorporating mathematically verifiable design and testing.

MBD, as its name suggests, creates models of the required functions linked to a specific domain- a particular knowledge field such as manufacturing, telephone networks or weather monitoring. Software design therefore starts with high-level domain characteristics and properties, rather than a set of generic computing functions. In the MBD paradigm, the domain expert can review the model and point out missing functionality or essential links between elements within the system, without needing to understand sophisticated programming techniques.

The methodology depends primarily on the use of domain specific modelling languages that can be used to simulate and authenticate a system graphically before building it- as is common for current CAD systems. The use of such languages allows developers to create a formal model of the system, run it on a workstation and analyse its performance with automated tools. Finally code and test cases can be generated and automatically verified. Use of such tools in the software development lifecycle has the potential for substantial payoff, by avoiding many costly process malfunctions and reworking iterations.
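A toy illustration of the idea, in Python (the pump-controller domain, its states and its events are invented for this sketch): the domain model is declared as data, the controller logic is derived from it, and test cases are generated from the model rather than written by hand.

model = {
    "initial": "idle",
    "transitions": {                      # (state, event) -> next state
        ("idle", "start_fill"): "filling",
        ("filling", "tank_full"): "idle",
        ("idle", "start_drain"): "draining",
        ("draining", "tank_empty"): "idle",
        ("filling", "sensor_error"): "fault",
        ("draining", "sensor_error"): "fault",
    },
}

def step(state, event):
    # Generated controller behaviour: events not allowed in a state are simply ignored.
    return model["transitions"].get((state, event), state)

def generate_tests():
    # Every transition in the domain model becomes a test case automatically.
    return [(s, e, nxt) for (s, e), nxt in model["transitions"].items()]

for start, event, expected in generate_tests():
    assert step(start, event) == expected
print(len(generate_tests()), "model-derived tests passed")

A domain expert can review the transition table directly, which is the point of the approach.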

Generic modelling languages such as UML are in wide use, but often result in large, complex models, whereas domain-specific modelling languages can incorporate relevant business rules and design concepts related specifically to the domain in a much more compact form.

Control system engineering and science provide the role model for this approach, based on formal logic and algorithms developed over many years. Although formal software and mathematical methods have been used for safety and security critical systems in applications such as nuclear power, chemical plants, space and defence they have not achieved widespread use in commercial or industrial software engineering. However this is likely to change as several key trends now begin to make this a more practical proposition.

First there is growing acceptance of model-based development for the design of embedded systems using toolsets such as MATLAB Simulink. This allows for rapid prototyping and design verification of test control and signal processing, particularly in avionics and electronic automotive systems.

Second is the growing power of formal verification tools, particularly model checkers. This software examines all possible combinations of input/output states and is therefore much more likely to find design errors than traditional testing.
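To show what exhaustive checking means in miniature, here is a sketch of an explicit-state model checker in Python (the two-valve controller and its interlock rules are hypothetical): every reachable state is enumerated and a safety property is asserted in each one, rather than sampled by tests.

from collections import deque

EVENTS = ["open_in", "close_in", "open_out", "close_out"]

def step(state, event):
    inlet, outlet = state
    if event == "open_in" and not outlet:    # interlock: inlet opens only if outlet is shut
        inlet = True
    elif event == "close_in":
        inlet = False
    elif event == "open_out" and not inlet:  # interlock: outlet opens only if inlet is shut
        outlet = True
    elif event == "close_out":
        outlet = False
    return (inlet, outlet)

def check(initial=(False, False)):
    seen, queue = {initial}, deque([initial])
    while queue:                             # breadth-first exploration of all reachable states
        state = queue.popleft()
        assert not (state[0] and state[1]), f"safety property violated in {state}"
        for event in EVENTS:
            nxt = step(state, event)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen)

print("explored", check(), "reachable states; property holds")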

The entire system is mapped and developers can then create the code and integrate it. Software development and maintenance is accelerated because the programmers have a clear idea of all the required functionality and how it relates to other elements in the system before coding.

Finally there is also much less risk of catastrophic programming errors, because engineers can detail the links between software elements beforehand, similar to CAD technology. If a component is missing or has been overlooked, it can be easily added to the model in a later step.

With the growing acceptance of MBD techniques, software development might finally have come of age.

Thursday, January 7, 2010

Future Enterprise- Convergence of X-Reality

First there was Virtual Reality- the creation of simulated games, objects and avatars; narratives embedded in online virtual worlds such as Second Life and World of Warcraft, with some 15 million subscribers.

Then came Augmented Reality- created by integrating or mixing real objects and natural spaces with layers of related computer-generated data, images and designs; enabling real and virtual scenarios to be seamlessly combined. Basic forms of AR technology are already being used to gain a more immediate and accurate sense of the task at hand in practical applications such as engine repairs, wiring assembly, architectural design and remote surgery.

But now, emerging from the evolution of cyberspace, is Cross- or X-Reality, with the boundaries between the real and the virtual extended yet again and becoming increasingly blurred in the process.

X-Reality environments essentially fuse two technologies- sensor networks and virtual worlds- bringing real world and realtime information into fully immersive virtual worlds and vice versa.

In hindsight it can be seen that Virtual and Augmented Realities are early phases in an ongoing evolutionary transition towards the acceptance of virtual forms as part of everyday human cognition. In the process we have crossed the threshold into a new space, extending human perception and interaction; linking ubiquitous sensory and actuator networks based on low cost microelectronic wireless technologies to create mixed realities.

The game is now on. By 2030, X-Reality will usher in an era of vastly extended reality, indistinguishable from the present world which has evolved over the period of life’s existence. In other words the world is evolving its own electronic nervous system via a dense mesh of sensory networks, eventually connecting and encompassing every object- living and non-living- on the planet. Such sensor networks help integrate physical reality into virtual computing platforms, generating the ability to react to real-world events in automated fashion. This is creating a revolutionary relationship between human society and the Web, and an urgent need to understand the way our behaviour and future processes will become irreversibly shaped by cyberspace.

Cross reality environments can therefore serve as an essential bridge across sensor networks and Web based virtual worlds. The Web is already beginning to host an immersive 3D sensory environment that combines elements of social and virtual worlds with increasingly dense geographical mapping applications, allowing the monitoring and planning of natural and urban ecosystems- particularly its capacity to cope with climate change.

X-Reality will be implemented through the integration of key design technologies, including-

Synchronously Shared Information- users will require open access to realtime data feeds and collection of information for analysis via centralised virtual command centres. Eventually control will devolve to decentralised self-organising and autonomous management systems working in partnership with users.

Complex Realtime Visualisation - users must be able to easily and flexibly visualise complex data, often delivered in 3D form. This will involve a high level of interactivity and collaboration, applying sensor-driven animation and the application of intelligent agents or avatars.

Ubiquitous Sensor Portals- I/O devices designed for rich two-way cross-reality experiences, which can stream virtual and remote phenomena into the user’s physical space; for example via video feeds and images uploaded from cameras. But this process can also extend into the past, allowing realtime access to historical data streams, vital for trendline analysis in business and the sciences.

Smart Phones- these will increasingly provide an intuitive interface that facilitates group collaboration in an ad hoc manner, via gesture as well as touch. Physical movement for outdoor users requires extreme mobility. Allowing augmented reality on smart phones that can query sensor networks and connect with shared online worlds paves the way for immersive mobile X-Reality.

Complex Event Processing- CEP- sensor networks will be particularly valuable in the future for generating data that tracks complex phenomena in the real world, detectable by high-level pattern matching and logic inference techniques. Applications include monitoring building and infrastructure maintenance, manufacturing and supply chain operations via RFIDs, as well as environmental emergencies such as fire and pollution risks. In addition, CEP systems will help make sense of conflict zones, ecosystem health, field operative performance and traffic flows and events (see the sketch below).
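A minimal sketch of the pattern-matching idea, in Python (the temperature feed, threshold and window length are invented for illustration): low-level sensor readings are scanned for a higher-level composite event- a sustained rise above a threshold- which is then raised as an alert.

from collections import deque

THRESHOLD, WINDOW = 70.0, 3
readings = [62.1, 65.0, 68.4, 71.2, 73.9, 75.5, 69.0]   # hypothetical sensor feed

window = deque(maxlen=WINDOW)
for t, value in enumerate(readings):
    window.append(value)
    # Composite event: three consecutive readings, each higher than the last, ending above the threshold.
    rising = len(window) == WINDOW and all(window[i] < window[i + 1] for i in range(WINDOW - 1))
    if rising and window[-1] > THRESHOLD:
        print("ALERT at step", t, "- sustained rise above", THRESHOLD)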

By 2030 most of our lives will be totally immersed in this shared reality. It will also redefine how we manage the vast and growing repository of digital information on the web- linking art, entertainment, work, science and daily life routines such as shopping, gaming and travel.

The Future Enterprise will be equally enmeshed- dependent on the management of its marketing, production and logistical operations and services via the medium of X-Reality.

Sunday, January 3, 2010

Future Enterprise- Adaptive Business Intelligence

The concept of adaptability is rapidly gaining popularity in business. Adaptability has already been introduced into everything from automatic car transmissions and adaptive search engines to running shoes capable of adjusting to the preferences of each unique user over time- and now to business management.

Adaptive business intelligence is a new discipline which combines three components- prediction, adaptation and optimisation. It can be defined as the discipline of using prediction and optimisation techniques to create self-learning decision systems.

Managers work in a dynamic and ever-changing economic and social environment and therefore require constant decision support in two linked timeframes- what is the best decision to make now and how will this change in the future.

The general goal of most current business intelligence systems is to access data from a variety of sources, to transform it into information and knowledge via sophisticated analytic and statistical tools and provide a graphical interface to present the results in a user friendly way. However this doesn’t guarantee the right or best decision outcomes.

Today most business managers realise that a gap still exists between having the right information and making the right decision. Good decision-making also involves constantly improving future recommendations- adapting to changes in the marketplace and improving the quality of decision outcomes over time. This involves a shift towards predictive performance management- moving beyond simple metrics to a form of artificial intelligence based software analysis and learning such as evolutionary algorithms.

Future Trends

The future of business intelligence therefore lies in the development of systems that can autonomously and continuously improve decision-making within a changing business environment, rather than tools that just produce more detailed reports based on current static standards of quality and performance.

It must incorporate techniques that build autonomous learning, with feedback loops that generate prediction and optimisation scenarios to recommend high-quality decision outcomes; but also with an in-built capacity to continuously improve future recommendations.
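A toy version of such a loop, in Python (the demand figures, margins and smoothing constant are invented for illustration): a forecast drives an ordering decision, the realised outcome feeds back to adapt the forecast, and the recommendations improve over time instead of remaining static.

import random

ALPHA = 0.3                    # learning rate for exponential smoothing
forecast = 100.0               # initial demand forecast
unit_margin, unit_waste = 5.0, 2.0

def decide_order(forecast):
    # Toy decision rule standing in for a real optimiser: order slightly above
    # the forecast to hedge the cost of lost sales against the cost of waste.
    return forecast * 1.05

for week in range(1, 9):
    order = decide_order(forecast)                    # prediction -> decision
    actual = random.gauss(110, 10)                    # hypothetical realised demand
    profit = min(order, actual) * unit_margin - max(order - actual, 0) * unit_waste
    forecast += ALPHA * (actual - forecast)           # feedback: adapt the forecast
    print(f"week {week}: order={order:.0f} actual={actual:.0f} profit={profit:.0f}")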

Such an evolutionary paradigm will be essential in an increasingly competitive and complex business environment. It is regressive to continue to rely on software support systems that repeatedly produce sub-optimal demand forecasts, workflows or planning schedules.

The future of business intelligence lies in systems that can guide and deliver increasingly smart decisions in a volatile and uncertain environment.