Principles

Future Surge advocates increasingly widespread review, adoption, and adherence to a set of principles known as the Singularity Principles:

  • Principles that will make a critical difference between society reaching a “positive singularity” and a “negative singularity”
  • Principles that apply to the anticipation and management of cataclysmically disruptive technologies
    • The NBIC convergence: Nanotech, Biotech, Infotech, and Cognotech
    • The likely forthcoming transformation of Artificial Intelligence (AI) into Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI)

The simple statement of the Singularity Principles is:

  • Think harder in advance about the possible consequences of developing and deploying technology in various ways
  • Monitor technology closely once it is deployed, being ready to intervene in case of any surprise.

The full version of the Singularity Principles splits into four areas:

  1. Methods to analyse the goals and outcomes that may arise from particular technologies
  2. The characteristics that are highly desirable in technological solutions
  3. Methods to ensure that development takes place responsibly
  4. Evolution and enforcement:
    • How this overall set of recommendations will evolve further over time
    • How to increase the likelihood that these recommendations are applied in practice rather than simply being some kind of wishful thinking.

For the Principles in each of these four areas, read on.

Analysing goals and potential outcomes

(A longer version of this section can be found here.)

Once projects are started, they can take on a life of their own.

It’s similar to the course taken by the monster created by Dr Frankenstein in Mary Shelley’s ground-breaking novel. A project – especially one with high prestige – can acquire an intrinsic momentum that will carry it forward regardless of obstacles encountered along the way. The project proceeds because people involved in the project:

  • Tell themselves that there’s already a commitment to complete the project
  • View themselves as being in a winner-takes-all race with competitors
  • Feel constrained by a sense of loyalty to the project
  • Perceive an obligation to fellow team members, or to bosses, or to others who are assumed to be waiting for the product they are developing
  • Fear that their pay will be reduced, and their careers will stall, unless the project is completed
  • Desire to ship their product to the world, to show their capabilities.

But this inertia could produce outcomes that are later bitterly regretted:

  • The project produces results significantly different to those initially envisioned
  • The project has woeful unexpected side-effects
  • Even if it is successful, the project may consume huge amounts of resources that would have been better deployed on other activities.

Accordingly, there’s an imperative to look before you leap – to analyse ahead of time the goals and potential outcomes we can expect from any particular project. And once such a project is underway, that analysis needs to be repeated on a regular basis, taking into account any new findings that have arisen in the meantime – rather than simply pouring in more funding and other resources regardless.

The bigger the potential leap, the greater the need to look carefully, beforehand, at where the leap might land.

The first six of the Singularity Principles act together to improve our “look ahead” capability:

Question desirability

  • Prioritise understanding the requirements (outcomes), rather than becoming preoccupied with particular tech solutions
  • Challenge assumptions about which outcomes are desirable
  • Challenge assumptions about the best ways to achieve these outcomes
  • Be ready to update these assumptions in the light of improved understanding
  • Avoid taking for granted that agreement exists on what will count as a “good” outcome

Clarify externalities

  • Consider possible wider impacts (both positive and negative) from the use of new products and methods, beyond those initially deemed central to their operation
  • Be sure to include these externalities in cost/benefit calculations
  • Move beyond profit margin, efficiency, and time-to-market (etc.)
  • Include broader measurements of human flourishing (a worked example follows this list)
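
To make the cost/benefit point above concrete, here is a minimal worked example in Python. Every figure is invented for illustration: the point is simply that a project which looks profitable on a narrow reckoning can turn net-negative for society once externalities are counted.

    # Narrow profit calculation versus one that includes externalities.
    # All figures are invented assumptions.
    direct_revenue     = 10_000_000
    development_cost   =  6_000_000

    # Externalities that a narrow calculation would omit:
    environmental_cost =  5_000_000   # e.g. energy use, waste disposal
    social_benefit     =    500_000   # e.g. improved accessibility

    narrow_net = direct_revenue - development_cost
    full_net   = narrow_net - environmental_cost + social_benefit
    print(f"narrow view: {narrow_net:+,}")        # +4,000,000
    print(f"with externalities: {full_net:+,}")   # -500,000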

Require peer reviews

  • A team with a good track record, and with apparently outstanding talent, may still make serious mistakes in their plans for a project
  • Novel project issues can break a previous run of success
  • Therefore involve independent external analysts in a check on the plans proposed

Involve multiple perspectives

  • The peer review of a project’s proposed goals and likely outcomes should involve people with multiple different skill sets and backgrounds (ethnicities, life histories, etc)
  • These reviewers should include not just designers, scientists, and engineers, but also people with expertise in law, economics, and human factors
  • A preoccupation with a single discipline or a single perspective could result in the project review overlooking important risks or opportunities

Analyse the whole system

  • When analysing the potential upsides and downsides of a proposed new technology, it’s vital to consider possible parallel changes in the wider “whole system”
  • The “whole system” is the full set of things that are connected to the technology that could be developed and deployed – upstream influences, downstream magnifiers, and processes that run in parallel
  • It also includes human expectations, human beliefs, and human institutions
  • It includes aspects of the natural environment that might interact with the technology.
  • Critically, it also includes other technological innovations
  • This kind of analysis might lead to the conclusion that a piece of new technology would, after all, be more dangerous to deploy than was first imagined
  • Or it could lead to changes in aspects of the design of the new technology, so that it would remain beneficial even if these other alterations in the environment took place

Anticipate fat tails

  • Bear in mind that not every statistical distribution follows the famous Normal curve, also known as the Gaussian bell curve
  • Initial observations of some data might lead us astray; the preconditions for the distribution of results being Normal might not apply
  • These preconditions require that the outcomes are formed from a large number of individual influences which are independent of each other
  • When, instead, there are connections between these individual influences, the distribution can change to have what are known as “fat tails”
  • In such cases, outcomes at least six sigma away from the previously observed mean – or even twenty sigma away – can arise far more often than the Normal curve predicts, taking everyone horribly by surprise
  • That possibility changes the analysis from “how might we cope with significant harm?”, such as a result three sigma away from the mean, to “could we cope with total ruin?”, such as a result that is, say, twenty sigma distant
  • In practical terms, this means that plans for the future should beware the creation of monocultures that lack sufficient diversity – cultures in which all the variations can move in the same direction at once
  • We should also beware the influence of hidden connections, such as the shadow links between multiple financial institutions that precipitated the shock financial collapse of 2008
  • The takeaway: the mere fact that performance trends seem well behaved for a number of years provides no guarantee against sharp ruinous turns of fortune – as the simulation sketch below illustrates.
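
The following simulation sketch makes the point concrete. It assumes, purely for illustration, two toy models of how outcomes might be generated: one from independent influences, and one where a hidden shared shock links the influences together. Only the linked case produces extreme multi-sigma deviations at an alarming rate.

    # A numerical sketch of the fat-tails point; all numbers are illustrative.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1_000_000

    # Independent influences: outcomes follow the Normal curve.
    independent = rng.standard_normal(n)

    # Linked influences: one hidden shared shock scales every influence at
    # once, producing a fat-tailed (Gaussian scale mixture) distribution.
    shared_shock = np.exp(rng.standard_normal(n))
    linked = shared_shock * rng.standard_normal(n)

    for label, outcomes in [("independent", independent), ("linked", linked)]:
        z = (outcomes - outcomes.mean()) / outcomes.std()
        print(label, "fraction beyond 4 sigma:", np.mean(np.abs(z) > 4))
    # Expect roughly 6e-5 in the independent case, and orders of magnitude
    # more in the linked case -- the "horrible surprise" described above.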

Desirable characteristics of technological solutions

(A longer version of this section can be found here.)

The next six Singularity Principles promote characteristics that are highly desirable in technological solutions.

Reject opacity

  • Be wary of technological solutions whose methods of operation we don’t understand
  • These solutions are called “opaque”, or “black box”, because we cannot see into their inner workings in a way that makes it clear how they are able to produce the results that they do
  • This is in contrast to solutions that can be called transparent, where the inner workings can be inspected, and where we understand why these solutions are effective
  • The principle states that we should resist scaling up such a solution from an existing system, where any failures could be managed, into a new, larger system where any such failures could be ruinous
  • Instead, more work is needed to make these systems explainable – and to increase our certainty that the explanations provided accurately reflect what is actually happening inside the technology, rather than being unreliable fabrications; the sketch below contrasts a transparent rule with an opaque one
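
The contrast can be sketched in a few lines of Python. Both functions below produce answers, but only the first carries an inspectable reason for its answer. The loan example, and the hash standing in for millions of learned weights, are purely illustrative.

    # Transparent: the rule itself explains the decision.
    def transparent_approval(income: float, debt: float) -> bool:
        """Approve when debt is under 40% of income -- inspectable, explainable."""
        return debt < 0.4 * income

    # Opaque: a stand-in for a black-box system. It produces answers, but
    # nothing in its internals explains *why* (imagine millions of learned
    # weights in place of the hash).
    import hashlib

    def opaque_approval(income: float, debt: float) -> bool:
        digest = hashlib.sha256(f"{income}:{debt}".encode()).digest()
        return digest[0] % 2 == 0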

Promote resilience

  • We should prioritise products and methods that make systems more robust against shocks and surprises
  • If an error condition arises, or an extreme situation, a resilient system is one that will take actions to reverse, neutralise, or otherwise handle the error, rather than such an error tipping the system into an unstable or destructive state
  • An early example of a resilient design was the so-called centrifugal governor, or flyball governor, which James Watt added to steam engines: when the engine rotated too quickly, the spinning flyballs swung outward and closed a valve, reducing the supply of steam and bringing the speed back down
  • Another example is the failsafe mechanism in modern nuclear power generators, which forces the fission reaction to shut down whenever temperatures become excessive, preventing the kind of meltdown that occasionally occurred with earlier designs; a toy simulation of this self-correcting feedback idea follows below
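
A toy simulation of that feedback idea, under wholly invented dynamics, might look like the following: a disturbance pushes the system off its set point, and a governor-style correction pulls it back rather than letting it run away.

    # Toy negative-feedback loop in the spirit of Watt's governor.
    # The dynamics and constants are invented for illustration.
    def step(speed: float, valve: float, set_point: float = 100.0) -> tuple[float, float]:
        # Feedback: too fast -> close the valve; too slow -> open it.
        valve = min(1.0, max(0.0, valve - 0.01 * (speed - set_point)))
        # Toy dynamics: drag slows the system; the valve feeds power in.
        speed = 0.9 * speed + 20.0 * valve
        return speed, valve

    speed, valve = 140.0, 0.5            # a sudden disturbance pushes speed up
    for _ in range(100):
        speed, valve = step(speed, valve)
    print(f"speed after recovery: {speed:.1f}")   # settles back near 100.0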

Promote verifiability

  • We should prioritise products and methods where it is possible to ascertain in advance that the system will behave as specified, without having bugs in it
  • We should also prioritise products and methods where it is possible to ascertain in advance that there are no significant holes in the specification, such as failure to consider interactions with elements of the environment, or combination interactions
  • Note that this principle goes beyond saying “verify products before they are deployed”; it says that products should be designed and developed using methods that support thorough and reliable verification (a small property-based sketch of this idea follows)
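
One concrete way to design for verifiability is to write the specification down as executable properties and check it against a large number of generated inputs. The sketch below uses the hypothesis property-based testing library; the clamp function is a hypothetical stand-in for a component of a larger system.

    # Property-based sketch: the spec is stated and checked mechanically.
    from hypothesis import given, strategies as st

    def clamp(value: float, low: float, high: float) -> float:
        """Return value limited to the range [low, high]."""
        return max(low, min(high, value))

    finite = st.floats(allow_nan=False, allow_infinity=False)

    @given(finite, finite, finite)
    def test_clamp_meets_spec(value, a, b):
        low, high = min(a, b), max(a, b)
        result = clamp(value, low, high)
        assert low <= result <= high          # the result always stays in range
        if low <= value <= high:
            assert result == value            # in-range inputs pass through

    test_clamp_meets_spec()   # hypothesis runs this over many generated cases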

Promote auditability

  • It must be possible to monitor the performance of the product in real time, in such a way that alarms are raised promptly in case of any deviation from expected behaviour
  • Systems that cannot be monitored should be rejected
  • Systems that can be monitored but where the organisation that owns the system fails to carry out audits, or fails to investigate alarms promptly and objectively, should be subject to legal sanction
  • Note that this principle goes beyond saying “audit products as they are used”; it says that products should be designed and developed using methods that support thorough and reliable audits (a minimal monitoring sketch follows)
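
A minimal monitoring sketch follows. The thresholds and the quantity being watched are illustrative assumptions; the essential shape is that every observation lands in an append-only log, and an alarm fires as soon as behaviour drifts outside the expected envelope.

    # Minimal runtime auditability: log every observation, alarm on drift.
    import logging
    import statistics
    from collections import deque

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("audit")

    class BehaviourMonitor:
        def __init__(self, expected_mean: float, tolerance: float, window: int = 50):
            self.expected_mean = expected_mean
            self.tolerance = tolerance
            self.recent = deque(maxlen=window)

        def record(self, value: float) -> None:
            self.recent.append(value)
            audit_log.info("observed=%.3f", value)   # append-only audit trail
            drift = abs(statistics.fmean(self.recent) - self.expected_mean)
            if drift > self.tolerance:
                audit_log.error("ALARM: drift %.3f exceeds tolerance", drift)

    monitor = BehaviourMonitor(expected_mean=0.5, tolerance=0.1)
    for reading in [0.52, 0.48, 0.75, 0.90, 0.88]:   # behaviour starts to drift
        monitor.record(reading)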

Clarify risks to users

  • It’s important to be open to users and potential users of a piece of technology about any known risks or issues with that technology
  • (Here, the word “user” includes developers of larger systems that might incorporate the original piece of technology in their own constructions)
  • The kinds of risks that should be clarified, before a user starts to operate with a piece of technology, include:
    • Any potential biases or other limitations in the data sets used to train these systems
    • Any latent weaknesses in the algorithms used (including any known potential for the system to reach unsound conclusions in particular circumstances)
    • Any potential security vulnerabilities, such as risks of the system being misled by adversarial data, or of its safety measures being edited out or otherwise circumvented (one machine-readable shape for such disclosures is sketched below)
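
One possible shape for such disclosures is a machine-readable record that ships alongside the component, in the spirit of published “model card” formats. The structure and all field values below are illustrative assumptions, not a standard.

    # A hypothetical machine-readable risk disclosure for a component.
    from dataclasses import dataclass, field

    @dataclass
    class RiskDisclosure:
        component: str
        data_limitations: list[str] = field(default_factory=list)
        algorithmic_weaknesses: list[str] = field(default_factory=list)
        security_vulnerabilities: list[str] = field(default_factory=list)

    disclosure = RiskDisclosure(
        component="loan-scoring-model-v2",   # invented example component
        data_limitations=[
            "training data under-represents applicants under 25",
        ],
        algorithmic_weaknesses=[
            "confidence estimates unreliable outside the training income range",
        ],
        security_vulnerabilities=[
            "susceptible to adversarially perturbed input records",
        ],
    )
    print(disclosure)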

Clarify trade-offs

  • This principle recognises that designs typically involve compromises between different possible ideals; these ideals sometimes cannot all be achieved in a single piece of technology
  • For example, different notions of fairness, or different notions of equality of opportunity, often pose contradictory requirements on an algorithm
  • Rather than hiding that design choice, it should be drawn to the attention of users of the technology
  • These users will, in that case, be able to make better decisions about how to configure or adapt that technology into their own systems
  • Another way to say this is that technology should, where appropriate, provide mechanisms rather than policies; the choice of policy can then be taken at a higher level, as in the sketch below
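
The “mechanisms rather than policies” idea can be shown in miniature. In the sketch below, the decision rule is a pure mechanism, while the contested choice – which threshold scheme counts as fair – is supplied explicitly by the deploying organisation. All names and numbers are invented for illustration.

    # Mechanism: a pure decision rule with no policy baked in.
    def approve(score: float, threshold: float) -> bool:
        return score >= threshold

    # Policy, chosen at a higher level. A single shared threshold ("equal
    # treatment") and per-group thresholds calibrated towards equal approval
    # rates ("equal outcomes") generally conflict; here the trade-off is
    # surfaced rather than hidden inside the mechanism.
    shared_policy     = {"group_a": 0.70, "group_b": 0.70}
    calibrated_policy = {"group_a": 0.72, "group_b": 0.65}   # invented numbers

    applicant = {"group": "group_b", "score": 0.68}
    policy = calibrated_policy               # the deployer's explicit choice
    print(approve(applicant["score"], policy[applicant["group"]]))   # True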

Ensuring development takes place responsibly

(A longer version of this section can be found here.)

The next five Singularity Principles cover methods to increase the likelihood that development takes place responsibly.

Insist on accountability

  • This principle aims to deter developers from knowingly or recklessly cutting key corners in the way they construct and utilise technology solutions
  • A lack of accountability often shows up in one-sided licence terms that accompany software or other technology
    • These terms avoid any acceptance of responsibility when errors occur and damage arises
    • If something goes wrong with the technology, these developers effectively shrug their shoulders regarding the mishap
    • That kind of avoidance needs to stop
  • Instead, legal measures should be put in place that incentivise paying attention to, and adopting, methods that are most likely to result in safe, reliable, effective technological solutions
  • The effectiveness of these measures will require:
    • Regular reviews to check that no workarounds are being used that allow developers to conform to the letter of the law whilst violating its spirit
    • High-calibre people who are well-informed and up-to-date, working on the definition and monitoring of these incentives
    • Society providing support to people in these roles of oversight and enforcement, by paying appropriate salaries, providing sufficient training, and protecting legal agents against any vindictive countersuits

Penalise disinformation

  • Penalties should be applied when people knowingly or recklessly spread wrong information about technological solutions
  • Communications that distort or misrepresent features of a product or method should result in sanctions, proportionate to the degree of damage that could ensue
  • An example would be if a company notices problems with its products, as a result of an audit, but fails to disclose this information, and instead insists that there is no issue that needs further investigation
  • This will require:
    • High-calibre people who are well-informed and up-to-date, working on the definition and monitoring of what counts as disinformation
    • The payment and training of such people will likely need to be covered from public funds

Design for cooperation

  • There should be a strong preference for collaboration on matters of safety
  • That’s in contrast to a headlong competitive rush to release products as quickly as possible, in which short-cuts are taken on quality
  • Cooperation often needs to be “designed into the framework” (at the technical or social levels) rather than arising spontaneously from marketplace interaction
  • For example, public policy could give preferential terms to solutions that share algorithms as open source, without any restriction on other companies using the same algorithms
  • Relatedly, a careful reconsideration of the costs and benefits of intellectual property rules is overdue
  • Public funding and resources should also be provided to support the definition and evolution of appropriate open standards, enabling the spirit of “collaborate before competing”

Analyse via simulations

  • Systematic attention should be given to simulation environments in which products and methods can be analysed in advance of real-world deployment, with a view to uncovering potential surprise developments that may arise under stress conditions (a small randomised harness is sketched after this list)
  • Note that designing and using test environments in an efficient, effective way is a major engineering discipline in its own right:
    • There’s little point in repeating the same test again and again with little variation; that would consume resources and delay product release with little additional benefit
    • Testing is, therefore, a creative activity
    • On the other hand, the more that test processes can be automated, the easier it can be to ensure they are completed in a comprehensive manner
  • Inevitably, each simulation environment is likely to have its own limitations and drawbacks: it won’t fully anticipate all the eventualities that may occur in real-world situations
  • However, over time, these simulations can and should improve, becoming more and more useful, and more and more reliable
  • Creating and maintaining best-in-class simulations is likely to require the support of public funding and resources
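
As a small illustration of the randomised-variation point above, the harness below sweeps generated stress conditions over a hypothetical component, instead of replaying one fixed scenario, and collects whatever surprises it finds. The component, its failure boundary, and the input distributions are all invented assumptions.

    # Randomised stress-test harness over a hypothetical system.
    import random

    def system_under_test(load: float, latency_ms: float) -> bool:
        """Invented stand-in for the real product; True means it coped."""
        return load * latency_ms < 5000.0    # toy failure boundary

    random.seed(1)
    failures = []
    for _ in range(10_000):
        load = random.uniform(0.0, 100.0)          # vary conditions each trial
        latency_ms = random.expovariate(1 / 50)    # occasional latency spikes
        if not system_under_test(load, latency_ms):
            failures.append((round(load, 1), round(latency_ms, 1)))

    print(f"{len(failures)} surprise failures; first examples: {failures[:2]}")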

Maintain human oversight

  • Although recommendations for next steps in developing products and methods will increasingly originate from software or AI, control needs to remain in human hands
  • We must ensure that such proposals arising from automated systems are reviewed and approved by an appropriate team of human overseers
  • That’s because our AI systems are, for the time being, inevitably limited in their general understanding
  • Rather than relying on the analysis of a single AI review system, we should look for ways to have multiple different independent AIs review the recommendations for product development; but in all cases, the final decisions in any contentious or serious matter should pass through human oversight, as in the routing sketch below
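
That routing might be sketched as follows. The scores, thresholds, and escalation rules are illustrative assumptions; the essential property is that contentious or high-stakes proposals always reach a human, while nothing bypasses the audit trail.

    # Human-in-the-loop gate over recommendations from multiple AI reviewers.
    from statistics import pstdev

    def route(proposal: str, ai_scores: list[float], high_stakes: bool) -> str:
        contentious = pstdev(ai_scores) > 0.2   # the independent AIs disagree
        if high_stakes or contentious or min(ai_scores) < 0.5:
            return f"ESCALATE to human overseers: {proposal}"
        return f"auto-approved (still logged for human audit): {proposal}"

    print(route("retrain model on fresh data", [0.90, 0.88, 0.91], high_stakes=False))
    print(route("relax a safety threshold",    [0.90, 0.40, 0.85], high_stakes=True))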

Evolution and enforcement

(A longer version of this section can be found here.)

The final area of the Singularity Principles covers how the overall set of recommendations is itself likely to evolve over time, and how the recommendations will be applied in practice rather than simply being some kind of wishful thinking.

Since they bridge what could be a yawning gulf between aspiration and actuality, these principles can be seen as the most important in the entire set.

Build consensus regarding principles

  • This set of principles should be discussed widely, to ensure broad public understanding and buy-in, with conformance in spirit as well as in letter
  • In this way, the principles can become viewed as being collectively owned, collectively reviewed, and collectively endorsed, rather than somehow being imposed from outside
  • Indeed, society should be ready to update these principles if discussion makes such a need clear – provided the potential change has been carefully reviewed beforehand; there is no assumption of “tablets of stone”
  • We can be guided in this discussion by applying many of the Singularity Principles, which were initially about the development of technology, to the principles themselves

Provide incentives to address omissions

  • Where any of this set of principles cannot be carried out adequately, measures should be prioritised to make available additional resources or suitably skilled personnel, so that these gaps can be filled
  • This may involve extra training, provision of extra equipment, transfer of personnel between different tasks, altering financial incentive structures, updating legal rules, and so on
  • However, if the gap between the recommendations of the principles and prevailing industry practices grows too large, something more drastic is needed – hence the final two principles in the set.

Halt development if principles not upheld

  • In case any of these Singularity Principles cannot be carried out adequately, and measures to make amends are blocked, any further development of the technology in question should be halted until such time as the principles can once again be observed
  • This may be viewed as a shocking principle, but it was applied very successfully as part of the revolutionary lean manufacturing culture developed at Toyota in Japan from the 1930s onward
    • Toyota executives realised it was actually to the competitive advantage of their company if each and every employee on the production line was able, on noticing a significant problem with production, to pull a cord to temporarily halt production of that product
    • The brake meant that wide attention was quickly brought to bear on whatever quality issue had been noticed
    • Production throughput slowed down in the short term, but quality, throughput, and reliability increased in the medium and longer term.

Consolidate progress via legal frameworks

  • Aspects of these principles should be embedded in legal frameworks, to make it more likely that they will be followed
  • There need to be appropriate penalties for violating these frameworks, just as there are already penalties in place when companies violate any of a range of existing regulations on health and safety, on truthfulness in advertising, or on the presentation of financial information
  • These legal frameworks will need to have trenchant backing from society as a whole; after all, some of the companies that are rushing ahead to create more powerful technologies have huge financial motivations to evade legal restrictions
    • These companies are receiving extensive investments, from banks or venture capitalists, under the assumption that they can produce and maintain a decisive competitive advantage
    • They are motivated to keep many of their plans under a tight veil of secrecy
    • Via the extensive budgets at their command, they can purchase the support of tame politicians
    • As such, they form what might appear to be an irresistible force – and they will need to be challenged by an equally strong counterforce
  • That counterforce is politics – or, better said, democratic politics
    • History teaches us that governments can, on occasion, build sufficient public support to impose a change of direction on major corporations
    • For example, anti-trust legislation in the US from the 1890s onward trimmed the power of large conglomerates or cartels in railways, steel, tobacco, oil, and telecommunications, helping to prevent monopoly abuse
    • Other legislation restricted widespread fraudulent or unsafe practice in fields such as food preparation and the distribution of supposed medicines (which were often “snake oil”)
  • Of course, just as there can be serious anti-social consequences of over-powerful corporations, there can be serious anti-social consequences of over-powerful politicians
    • Just as there are well-known failure modes of free markets, there are well-known failure modes of political excess
    • Just as corporations need to remain under the watchful eye of society as a whole, the political framework also needs to be watched carefully by society
  • That’s why the counterforce to dominant corporations should be, not just politics, but democratic politics – politics that (when it works well) responds quickly to the needs and insights of the entire population
    • That’s the kind of politics that Future Surge is working to encourage and enable
  • Moreover, just as the content of the Singularity Principles needs to be subject to revision following public debate, the corresponding legal statutes likewise need to be subject to prompt revision, whenever it becomes clear, following appropriate public review, that they need amending
    • In other words, the legal frameworks need to combine both strength and adaptability
  • None of this will be easy: it will require high-calibre politics to ensure it works well; it will also require high-calibre geopolitics, to ensure a suitably level playing field on the international stage
