Portuguese Technology Minister Heitor on technology governance in a digital age




Understanding “human agency” and the need to guarantee responsible, people-centered and climate-aware systems for our common good in a decentralized and AI-driven digital age

Manuel Heitor, Minister for Science, Technology and Higher Education, Portugal

Background note prepared for the McCourt Institute inaugural event, The Future of Tech Governance, Paris, March 9 & 10, 2022


Evolving forms of technology governance, including the regulation of digital platforms and digital standards, should be oriented to promote “digital humanism” and guarantee a transdisciplinary approach to collective behaviors and the consideration of “human agency”. They should ensure that citizens at large have better knowledge of digital services and providers, together with improved user responsibility, in an emerging decentralized digital age of AI-enabled innovations. Although most of the current debate is dominated by new technological advances in products and services in the financial industry (i.e., Fintech), as well as related issues associated with blockchain in the context of cryptocurrencies, the acceleration of decentralization and AI affects a diverse set of actors and sectors of activity and all of our daily lives, from industry and critical infrastructures to the arts (e.g., NFTs, non-fungible tokens).

We focus this note on the need to guarantee our collective responsibility towards carbon neutrality, avoiding a climate disaster, as well as promoting our global safety. This requires new research of public interest but, above all, these issues should decisively inform the technology governance of decentralized digital networks and the increasingly massified use of AI. A few case studies are provided, including sustainable land management for carbon neutrality, the preservation of coastal areas and the protection of space assets in the era of “New Space”.

Empowering users and citizens at large will require educating and training every single citizen, while ultimately avoiding capture by dominant economic or political interests, as well as digital terrorism and related individual wrongdoing. The rules of governance must boost research and innovation, foster growth and competitiveness and help smaller companies and start-ups to compete with very large players, in particular those who have the ability to copy their features, acquire them or block their business. New governance models must facilitate access to and use of data by consumers, while providing incentives for them to invest in ways to generate value through data in association with “human agency”. This includes the combination of anonymized data from different sources to produce new and valuable insights and services. In addition, rules should evolve to fight “mendacity” and, in contrast, to foster “fact checking”. They should also safeguard against the illegal transfer of data without notification, for example by a “cloud” service provider without traceability, while promoting the development of interoperability standards so that data can be reused across sectors.


Global digital platforms have become an integral part of our lives, with evident benefits, but also with emerging threats to democracy, fundamental rights, societies and the economy, including rising inequalities. It is in this context that emerging decentralized digital platforms bring new collective challenges and opportunities across all sectors of activity and all of our daily lives. Although most of the debate is dominated by new technological advances in products and services in the financial industry (i.e., Fintech) and related issues associated with blockchain in the context of cryptocurrencies, it covers a wide range of activities from industry to the arts (e.g., NFTs, non-fungible tokens).

Overall, the emerging uncertainties associated with lack of regulation result from a few dominant economic or political interests, as well as digital terrorism and related individual wrongdoing, disregarding people at large and, above all, our collective behaviors (e.g., Bak-Coleman et al, 2021). We argue that the role and scope of regulation must be revisited, considering an increasingly complex global network of actors and the growing sophistication of AI algorithms. The role of regulators must be reshaped towards a time-sensitive, people-centered and climate-aware approach for our common good, in order to better protect citizens from abuse and manipulation.

In other words, and following Helga Nowotny (2021), they should be oriented to promote “digital humanism” and guarantee a transdisciplinary approach to collective behaviors and the consideration of “human agency” (UNDP, 2019; chapter 6).

I should note that recent unexpected threats to our common safety and public health, such as the Covid-19 pandemic, the increasing activity of individual digital terrorism and the Russian invasion of Ukraine, have shown that our societies are not as safe as we thought. Any deep reflection on these issues must lead us to safer, more resilient forms of digital governance, necessarily centered on people and based on collective knowledge. The resilience provided by decentralized digital networks, and the power of machine learning techniques and Artificial Intelligence algorithms combined with data science, offer us new tools to define future technological governance.

In addition, recent analysis has shown that moving from the version of the internet most of us know today, dominated by companies that provide services in exchange for our personal data (i.e., “Web2”, including large firms such as Google, Facebook, Amazon, Airbnb and Uber, among others), to a context of decentralized apps that run on blockchains (i.e., “Web3”, including Bitcoin as a digital currency and decentralized applications built on top of networks such as Ethereum, as well as Helium, Maker and Ocean) brings with it the potential advantage, among others, that no one can be blocked or denied access to a service (at least in theory). In addition to blockchain, the emerging next-generation world wide web will leverage machine learning and artificial intelligence even more to achieve real-world human communications and transactions.

But Web3 has some limitations, at least for the moment, including: i) scalability, because transactions are still slower than traditional ones; ii) a larger “time to value” than incumbent technologies, because of the need for extra steps, new software and, above all, education and further research; iii) accessibility, due to the current lack of integration in modern web browsers; and iv) cost, because the dominant blockchain technologies (i.e., “proof of work”) are still expensive in terms of energy consumption and negative climate impact (e.g., Sam Richards, 2021). In particular, increased energy consumption and the inherent CO2 footprint is a clear blockchain drawback that cannot be forgotten. In addition, decentralized and distributed systems are prone to hacker attacks, particularly those targeting critical infrastructures (such as electric grids, water distribution, financial networks and gas pipelines).
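To make the energy argument concrete, the following is a minimal, purely illustrative proof-of-work sketch (not the code of Bitcoin or any real blockchain): a miner must try nonces until a hash meets a difficulty target, and each extra hex digit of difficulty multiplies the expected number of attempts, and hence the energy spent, by sixteen. The payload string and difficulty are assumptions for the example.

```python
# Minimal proof-of-work demo: find a nonce so that the SHA-256 hash of
# (data + nonce) starts with `difficulty` zero hex digits. The only way
# to find such a nonce is brute force, which is why proof-of-work
# consumes energy in proportion to the difficulty target.
import hashlib

def mine(data: bytes, difficulty: int) -> int:
    """Return the first nonce whose hash meets the difficulty target."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

# Each unit of difficulty multiplies the expected work by 16:
# difficulty 4 already needs ~16**4 = 65,536 hash attempts on average.
nonce = mine(b"block-payload", difficulty=4)
print(f"nonce found: {nonce}")
```

Real networks tune the difficulty so that, globally, blocks arrive at a fixed rate; the aggregate hash rate, and therefore the energy use, grows with the number of competing miners.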

Although the advantages and disadvantages of centralized and decentralized digital networks are still subject to many uncertainties and require comprehensive technical and policy debates, it is clear that decentralization and blockchain control are not completely immune to biases: blockchain algorithms incentivize and ultimately end up giving preference to participants that have access to more nodes, and therefore to the most active ones. Artificial Intelligence (AI) can help by modeling the information flows and learning the critical patterns of use by different participants. Such patterns can then provide input to the setting of the parameters that govern the behavior of blockchain algorithms.
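As a toy illustration of the kind of participation bias such analysis could surface, the sketch below computes a Gini coefficient over per-node activity counts: 0 means perfectly even participation, values near 1 mean a few nodes dominate. The node names and figures are invented for the example; a real monitoring system would learn far richer patterns than a single summary statistic.

```python
# Hypothetical sketch: quantifying participation bias in a network
# from per-node activity counts (the data below is illustrative).

def gini(values):
    """Gini coefficient of a list of non-negative activity counts."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard formula using the rank-weighted sum of the ordered values.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

# Blocks validated per node over an observation window (assumed data):
activity = {"node-a": 400, "node-b": 350, "node-c": 30, "node-d": 20}
bias = gini(list(activity.values()))
print(f"participation Gini: {bias:.2f}")  # two nodes dominate, so bias is high
```

A regulator or protocol designer could track such an indicator over time and feed it back into the parameters that govern validator selection.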

To help understand these issues in terms of our common public goods, the following paragraphs address the need to promote research of public interest and then focus on the need to guarantee carbon neutrality, address the disastrous effects of climate change (e.g., Scuri et al, 2022) and promote our global safety. This is what we believe must drive technology governance in an emerging decentralized digital age. A few case studies are provided, including sustainable land management for carbon neutrality, the preservation of coastal areas and the protection of space assets in the era of “New Space”.

Fostering research of public interest

It is becoming well known that the virtuous combination of advances in AI and blockchain can lead to a better-governed digital age that achieves a higher level of common good than would otherwise be possible. It is a basic fact that data science and AI have been changing our lives for the past few years, and the revolution they are provoking tends to evolve exponentially. Some sixty years after the first scientific papers on AI were published, we now see numerous knowledge-intensive business services being developed and deployed at a fast pace. And this is not limited to the private sector, with the digital transformation of the public sector also ramping up at unprecedented levels.

Examples include data handling and analysis in public health, land register and sustainable land management for fire prevention, biodiversity management, protection of space assets, data analysis for consumer protection, or accident and disaster prevention, among many other areas of critical relevance in the public domain and public-private interactions. On the one hand, states as political authorities are designing policies and regulations to protect citizens from AI-related harms and risks, whilst on the other hand public administrations are showing a clear interest in using AI-enabled systems and technologies to improve their processes, services and policies. It is in this context that the experience of the Portuguese Initiative on Digital Skills, INCODE.2030, has shown that it is becoming critically relevant to foster research of public interest among the AI research communities, in close cooperation with public administration.

However, the massified use of AI-enabled innovations is also not free of additional questions, because the “power it has to make us act in the ways it predicts, reduces our agency over the future” (Nowotny, 2021). In predicting our behavior, AI systems can end up changing it. Consequently, collective human wisdom needs to be strengthened so that the emerging regulatory issues of a decentralized digital age help promote critical approaches to AI, with clear accountability and clarity about boundaries and purpose, as well as responsibility (e.g., Thelisson et al, 2019). This requires rethinking the techno-centric narrative of progress, embracing and harnessing uncertainty, and abandoning the fantasy of control over nature and the illusion of techno-centric dominance of AI-enabled innovations (Gill, 2022).

The issue is clear in that it creates tensions between developers/promoters and human-led policy making, which needs to be informed by negotiations of trade-offs. Above all, it requires a transdisciplinary approach to collective behaviors and the consideration of “human agency” across economics, philosophy, law, science and technology studies, history and sociology, to engage with all the necessary ingredients of an emerging decentralized digital age and AI-enabled innovations.

Understanding knowledge as our common public good will allow citizens to be an integral part and a key stakeholder of future developments. It will drive policy-makers to better understand how decentralized digital networks and AI can be used and further developed to make public services more effective and deliver seamless services, cutting down on digital bureaucracy and giving citizens back their most precious asset: their time. In addition, it will drive new policy options targeted at enhancing the governance and regulation of decentralized digital networks, including in the public sector, aimed at ensuring high standards of conduct across all areas of public sector practice, promoting public sector effectiveness and delivering better service to users.

The key idea is that decentralized digital networks, together with AI, have the potential to contribute significantly to solving long-standing problems in the public sector, such as large unmanageable caseloads, administrative burdens, delays in service delivery and language barriers, through automated working processes as well as improved decision-making and service quality. For this vision to become a reality, the associated risks and challenges must be better understood, so that the secure and successful implementation and application of AI can be assured at large. Ultimately, the reliance on decentralized digital networks and AI developments in terms of design, production and even management must be combined with an unshakeable commitment to uphold the transparency and accountability standards in the public sector which ultimately sustain our democratic institutions.

These challenges and associated risks can possibly be mitigated in implementations using practices, methods and tools generated by a new trend in research and innovation, that of “Responsible AI”, underscoring principles such as fairness, transparency and explainability, human-centeredness, privacy and security.

Against this background, pilot projects to foster the adoption of decentralized digital networks together with AI-enabled innovations through public-private interactions are critically relevant to support redesigning governance processes and policy-making mechanisms, as well as to improve public services delivery and achieve a responsible combination of new, large data sources with advanced machine learning algorithms.

Sample case studies on critical technology governance to foster responsible and decentralized digital networks together with AI-enabled innovations

Sustainable land management for carbon neutrality

How far are decentralized digital systems and AI able to promote carbon neutrality?

Moving towards carbon neutrality will greatly depend on the way we will be able to guarantee adequate land management, in that all relevant stakeholders have access to, and are equipped with, adequate systems for sustainable land and integrated forest fire management. This involves civil protection services and forest guards, as well as public and private land users and actors (including municipalities and land governance institutions, as well as firms and individual landowners). In other words, the adoption of decentralized digital networks, together with AI-enabled innovations, very high resolution satellite-based Earth Observation systems fully integrated with high-performance computing capacity, and user-friendly computer interfaces, is expected to effectively support sustainable land management. These activities are critical to avoid the emerging escalation of fire hazards at unprecedented levels and should be regulated so that actions undertaken by landowners contribute to the common good and guarantee fire prevention.

Such efforts have been promoted by the Portuguese Space Agency (PT Space) and the Portuguese Agency for Integrated Fire Management (AGIF), in close cooperation with other European authorities in land use planning, forest management, fire prevention and land register, in order to contribute to:

  1. monitor forest biodiversity and decrease the likelihood of extreme and severe fire events;
  2. support forest fire risk governance and management mechanisms to halve the impact of forest fires by 2030;
  3. monitor fuel management efforts in wildland/rural-urban interfaces, as well as in forest areas, undertake risk assessment and support real-time fire risk monitoring and the exposure of highly sensitive areas; and
  4. support law enforcement operations towards compliance with fuel management regulation around buildings and critical infrastructures, and support a flexible tasking of surveillance and suppression resources considering risk and uncertainty, while encapsulating intra-spatial and temporal variability.

Understanding the triangulation of new knowledge, institutional innovation and new observation methods will be critically relevant because:

  • Forests, shrubland and pastures can play different roles in the carbon cycle, from net emitters to net sinks of carbon, because forests sequester carbon by capturing carbon dioxide from the atmosphere and transforming it into biomass through photosynthesis. Sequestered carbon is then accumulated in leaves, branches, trunks and roots (biomass), in deadwood, litter and forest soils. The release of carbon from forest ecosystems results from natural processes (respiration and oxidation) and from deliberate or unintended results of human activities (i.e., harvesting, fires, deforestation, soil mobilization);
  • Forests, shrubland and pastures and their role in the carbon cycle are affected by changing climatic conditions. Evolutions in rainfall and temperature can have either damaging or beneficial impacts on forest health and productivity, which are very complex to predict. Depending on circumstances, climate change will either reduce or increase carbon sequestration into forests, which causes uncertainty about the extent to which forests are able to contribute to climate change mitigation in the long term. Also, forest management activities have the potential to influence carbon sequestration by stimulating certain processes and mitigating impacts of negative factors;
  • Forests, shrubland, pastures and natural land ecosystems in the European Union, for example, play multiple significant roles, including carbon sequestration. It is estimated that the forest biomass in the EU27 countries contains 9.8 billion tons of carbon (tC). The total CO2 emissions of the EU27 countries in 2004 were 1.4 billion tons of carbon equivalent. This means that the amount of carbon emitted every year by the EU27 equals nearly one-seventh of the carbon stored in the EU27 forests. As a result, the value placed on forests in the EU can be seen as a viable way of mitigating GHG emissions through carbon sinks and sequestration.

Overall, improved public services coupled with public-private interactions on sustainable land management depend on a responsible combination of new, large data sources with advanced machine learning algorithms oriented to:

    1. assure the monitoring of CO2 sequestered in soil and vegetation through a very high resolution database, aiming for a sustainable forest by contributing to an effective 55% reduction of CO2 emissions by 2030 and full carbon neutrality in 2050; and
    2. promote a new market for very high resolution (i.e., submetric) satellite-based Earth Observation systems fully integrated with advanced information systems, through revised legal systems requiring that all municipalities and land governance institutions, as well as firms and individual landowners, are properly equipped with high resolution, space-based fire prevention and sustainable land management tools.
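The one-seventh ratio quoted above follows directly from the two figures in the text:

```python
# Checking the “one-seventh” claim from the figures quoted in the text.
forest_carbon_stock_GtC = 9.8  # carbon stored in EU27 forest biomass (billion tC)
annual_emissions_GtC = 1.4     # EU27 emissions in 2004 (billion tC equivalent)

fraction = annual_emissions_GtC / forest_carbon_stock_GtC
print(f"annual emissions / forest carbon stock = {fraction:.4f} (about 1/7)")
```

In other words, each year the EU27 emits an amount of carbon equal to roughly one-seventh of what its forests currently hold, which is why protecting that stock matters so much for the emissions balance.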

One of the benefits of using decentralized systems would be to enable new business models that make it attractive for firms and landowners to participate in government-led land management efforts. But these goals can only be achieved through a concerted action oriented to promote:

  • The development and deployment of tools for the monitoring of forest-induced carbon credits and their corresponding monetization;
  • Advanced decentralized digital networks, information systems and Artificial Intelligence methodologies: on-line forecast/AI modelling of “fire risk level” with a capacity for 90% accuracy prediction up to 3 days in advance and the necessary release of early warnings, together with on-line capacity for dynamic forecasting of the carbon cycle and prediction of the levels of carbon stock and sequestration in forests; machine learning algorithms crossing information from different sources and types to accelerate land identification;
  • High performance computing capacity: capacity for near real-time weather forecasts and massive calculations of soil parameters and levels of carbon sequestration;
  • Providing users with a decision support system that, through probabilistic risk modelling and scenario planning trade-off analysis, using the best available information (e.g., submetric resolution and near real-time data on weather conditions, land use and land cover) and processing capacity, allows practitioners to prioritize investment decisions regarding landscape planning and fuel management at national, regional and sub-regional scales. The core value of such a decision support tool resides in using a quantitative wildfire exposure assessment to map, compare and inform management priorities in vast areas;
  • Interoperability platforms that enable better land management by providing information from landowners but also from the central administration (fostering the “once only” principle) and municipalities.
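The operational fire-risk forecasting described above is a large modelling effort; as a purely illustrative sketch of the idea, the toy score below combines four weather drivers with assumed weights and thresholds, loosely echoing the “30-30-30” rule of thumb used by fire services (temperature above 30 °C, humidity below 30%, wind above 30 km/h). It is not the PT Space/AGIF system; every constant here is an assumption for the example.

```python
# Toy daily fire-risk score from weather drivers. All weights and
# thresholds are illustrative assumptions, not an operational model.

def fire_risk_score(temp_c, rel_humidity_pct, wind_kmh, days_since_rain):
    """Return a risk score in [0, 1]; higher means more dangerous."""
    heat = min(max((temp_c - 10) / 30, 0), 1)               # 10..40 C -> 0..1
    dryness = min(max((60 - rel_humidity_pct) / 50, 0), 1)  # 60..10 % -> 0..1
    wind = min(wind_kmh / 60, 1)                            # 0..60 km/h -> 0..1
    drought = min(days_since_rain / 30, 1)                  # 0..30 days -> 0..1
    # Weighted combination; the weights are illustrative.
    return 0.3 * heat + 0.3 * dryness + 0.2 * wind + 0.2 * drought

def risk_level(score):
    if score >= 0.8:
        return "extreme"
    if score >= 0.6:
        return "high"
    if score >= 0.4:
        return "moderate"
    return "low"

# A hot, dry, windy day after a long drought:
s = fire_risk_score(temp_c=38, rel_humidity_pct=15, wind_kmh=45, days_since_rain=25)
print(risk_level(s))  # prints "extreme"
```

A real system would replace the hand-set weights with models learned from historical fire records, satellite fuel-moisture data and high-resolution weather forecasts, which is precisely where the machine learning and HPC capacity listed above come in.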

Preserving coastal areas

How far are decentralized systems and AI able to create and promote new decentralized markets for ocean information and ocean goods (including those beyond fisheries and tourism), as well as the preservation of coastal areas?

This issue is critically relevant because Europe has one of the world’s most extensive coastlines, which provides accommodation and living conditions for about 40% of the European population (i.e., over 160 million people living in European coastal areas). It is in this context that the Atlantic International Research Center (AIR Centre) is promoting an open observation platform for new business development, making use of an integrated advanced information system and a decentralized digital network, including a dedicated low-orbit satellite constellation, different types of in-situ sensors and new AI-based functions. It is a sustainable way to provide capabilities contributing to the socio-economic development of a sustainable European “Blue economy”.

It is designed as a user-oriented, research-driven platform to bring together key strategic partners operating in the Atlantic, the Arctic, the Baltic and the Mediterranean to guarantee a deep understanding of space-climate-oceans interactions through a new generation of optical sensors with a spatial resolution better than 1 m and, in the near future, down to 1 cm, with an adequate temporal resolution based on fully continuous, on-line images.

Further, a Portuguese-led consortium for one of the five New European Bauhaus lighthouse projects is promoting the “Bauhaus of the Seas” as a new aesthetic movement towards implementing the Green Deal based on sustainability, social inclusion and beauty. The Bauhaus of the Seas aims to promote renewed ethical, economic, cultural, spatial and aesthetic regenerative development from a widely diverse range of dimensions of our relationship with water bodies. It will convey a space of encounter to design future ways of living in coastal areas that regenerate these recognized ecosystems, crucial to fighting climate change. Situated at the crossroads between art, culture, inclusion, citizenship, science and technology, it calls for a collective effort to imagine and build a sustainable and inclusive future.

Knowledge of the seas and oceans is proving increasingly important in solving key problems in climate modelling and future climate predictions. Environmental protection and climate adaptation and resilience are already strategic topics for coastal and maritime regions. The use of space technology, in particular all types of Earth observation data and applications, is an essential element for the development of activities and research in this area. From an environmental point of view, oceans and seas are key for monitoring and understanding climate change, while their balance is under significant threat from pollution.

From an economic point of view, the relevance of emerging forms of the “blue economy” is directly linked to their impact on the transportation, energy and food sectors. As the oceans are a source of limited (and often threatened) resources subject to exploration interests from many parties, fair allocation and oversight can be achieved with proper monitoring and inspection mechanisms in place. Tourism related to maritime activities is also expected to grow, with the challenge of keeping it eco-friendly.

From a safety and security point of view, autonomous shipping, piracy and smuggling are all elements of importance, as are alert systems for threats such as tsunamis and other extreme weather conditions that can pose danger to coastlines and recreational navigation. For the latter, services associated with Search and Rescue are expected to see higher demand and to include more sophistication and interoperability between space and terrestrial/maritime means.

Coastal zones are among the most productive areas on Earth, providing a wide range of valuable ecosystem services to populations and wildlife. These areas are being severely threatened by both anthropogenic impacts (e.g. pollution, physical changes, loss of habitats, urban sprawl) and environmental changes (e.g. sea-level rise, water temperature increase, coastal erosion, ocean acidification).

The total value of the services produced by marine and coastal ecosystems is estimated at USD 29.5 trillion per year. But ocean and sea health are about much more than wealth.

Earth Observation data from existing satellites is used today to partially mitigate the challenges of preserving our ocean and coastal areas. Very large US satellites from NOAA and NASA, together with the Sentinel satellites of the European Copernicus programme, are delivering valuable data to deal with environmental changes and anthropogenic impacts. However, there are two emerging needs regarding the modernization of the existing satellite fleet: i) data with a higher spatial resolution is required; and ii) coastal and ocean data is required with a higher frequency.

Through the AIR Centre these needs are being addressed as follows: i) very high resolution data is covered by the new Portuguese satellite operator GEOSAT, providing data down to 75 cm resolution with the GEOSAT 2 satellite; and ii) a new Atlantic Constellation of 16 microsatellites is being designed and promoted to provide coastal and ocean data every 2-3 hours for physical phenomena that require such a frequency (e.g., tsunami alerts or algae bloom detection). This decentralized network, combining different sets of satellites with in-situ sensors (buoys, gliders, CTDs), AI algorithms and data science, is the basis of the AIR Centre Earth Observation Laboratory on Terceira Island (Azores).

A consideration and critical appraisal of these challenges and risks, which can be technological, implementation-related, legal-regulatory, ethical and social in nature, is mandatory for a responsible, secure and expedient approach to AI. Overcoming these challenges and risks will be decisive for the future success and acceptance of AI in the public sector and society.

Protection of space assets in the era of orbital space economy and in-orbit servicing as well as the advent of “autonomous distributed Space infrastructures”

What are the consequences of the changing economic environment of space activities in a new era of orbital space economy and in-orbit servicing (i.e., “New Space”)?

This question is driven by the substantial increase in the number of satellites in orbit, notably with the development of so-called mega-constellations. The cost of sending satellites into space is continuously decreasing, notably due to new production approaches, the use of re-usable launchers and the development of micro-launchers. At the same time, the development of small satellites is lowering the price tag to take payloads into space. This has attracted venture capital investment given that the potential return on investment is growing.

Since the beginning of the space race, about 6,000 launches have put into orbit 11,800 satellites, of which 4,550 are currently operational. It is estimated that more than 20,000 additional satellites will be launched in the next ten years. This growing number of satellites increases the complexity of space operations and makes it impossible to operate a spacecraft safely without taking other spacecraft into account.

In addition, the rise in the number of satellites and space traffic activity will inevitably increase the volume of debris generated and the risk of collisions. Today there are around 128 million pieces of debris smaller than 1 cm orbiting Earth, and approximately 900,000 pieces between 1 and 10 cm. The current count of large debris (defined as 10 cm or larger) is 34,000. In recent years this problem has been especially aggravated by deliberate collisions by China and Russia, each of which generated more than 2,000 large objects and tens of thousands of smaller fragments.

Following the recently adopted Agenda 2025 of the European Space Agency (ESA) through the “Matosinhos Manifesto” (November 2021) and the European Commission Communication of February 15, 2022, space is clearly increasingly contested, threatening the security and resilience of space assets and highlighting the urgent need for international discussions to agree on and implement norms of responsible behaviour in outer space by state and non-state actors. In particular, the region of outer space around Earth that includes all orbits below 2,000 km (i.e., Low Earth Orbit, LEO), home of the International Space Station and of thousands of other satellites, is rapidly becoming a hazardous area due to space debris and inoperable spacecraft orbiting at very high speeds.

The protection of active satellites requires the execution of evasion maneuvers with a certain frequency to avoid collisions with other satellites and debris objects. This forces satellite operators to dedicate significant resources in their operational teams to detecting and avoiding these collisions, and also forces space mission designers to incorporate additional fuel for this type of maneuver, with the corresponding increase in the weight and cost of the missions.

The danger of this situation is not limited to assets in space; assets on Earth and people are increasingly at risk. Although no death from falling space debris has been reported to this day, and most objects disintegrate upon entering the Earth’s atmosphere, a number of objects manage to reach Earth and pose an obvious risk to citizens. These objects include the large space stations (Skylab, MIR, International Space Station), the large satellites (Envisat, the Hubble Telescope, …), space systems with refractory components (some optical systems for astronomical satellites) and, especially, nuclear-powered satellites. Unfortunately there are still several nuclear-powered satellites in Earth orbit. These satellites do not disintegrate on re-entry into the atmosphere and reach the Earth’s surface, creating “radioactive rain” with obvious danger to the population. The fact that the Earth’s surface is mostly ocean helps mitigate this problem, but it did not prevent a nuclear accident in northern Canada in January 1978, with the fall of one of these satellites (Cosmos 954), in this case launched by the Soviet Union.

The scope of this problem goes beyond protecting assets in space or on Earth; the greatest risk is the so-called Kessler syndrome. The Kessler syndrome, also called collisional cascading, was proposed by NASA scientist Donald J. Kessler in 1978. It is a scenario in which the density of objects in low Earth orbit (LEO), due to space pollution, is high enough that collisions between objects cause a cascade in which each collision generates space debris that increases the likelihood of further collisions. In this scenario, new space debris from collisions grows much faster than atmospheric drag removes it, leading to a situation where space activities and the use of satellites in specific orbital ranges become impossible for many generations.
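The cascade logic can be illustrated with a deliberately crude difference equation (not a validated debris model; the constants are arbitrary assumptions): collisions scale with pairs of objects, so fragment creation grows quadratically with the object count, while drag removal grows only linearly. Whether the population decays or runs away depends on which term wins at the current density.

```python
# Toy Kessler-cascade illustration:
#   N(t+1) = N(t) + k * N(t)**2 - d * N(t)
# where k*N**2 stands for collision-generated fragments (collisions scale
# with pairs of objects) and d*N for atmospheric drag removal. The
# constants k and d below are arbitrary assumptions for the sketch.

def simulate(n0, k, d, years, cap=10_000_000):
    """Iterate the map; stop early if the count exceeds cap (runaway)."""
    n = float(n0)
    for year in range(1, years + 1):
        n = n + k * n * n - d * n
        if n >= cap:
            return year, n  # cascade: fragment creation outran drag removal
    return None, n          # no runaway within the horizon

# Same starting population, different collision rates:
stable_year, stable_n = simulate(n0=1000, k=1e-7, d=0.02, years=50)
runaway_year, runaway_n = simulate(n0=1000, k=5e-5, d=0.02, years=50)
print("low-density case decays:", stable_year is None)
print("high-density case crosses the cap in year:", runaway_year)
```

The qualitative point is the threshold: below a critical density the same physics cleans the orbit, above it the population explodes, which is why debris mitigation rules aim to keep LEO on the right side of that threshold.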

It is in this context that the development and deployment of automatic collision-avoidance services and the introduction of a distributed autonomous infrastructure in space are a must to cope with the increased number of space objects, but this will only be feasible through decentralized digital networks and AI-enabled innovations, as well as public-private interactions.

The adoption of international governance rules is of paramount importance to ensure the future use of space for the benefit of humankind. This requires the development of a decentralized network of sensors (optical telescopes and radars) geographically distributed across the Earth’s surface, together with the application of AI-based algorithms to provide key services (collision avoidance, re-entry alerts, and fragmentation and collision detection) and to support policy makers in defining the rules for the technological governance of this serious problem.
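The core of a collision-avoidance service like the one described above can be sketched as a conjunction screen over predicted state vectors. The sketch below is a highly simplified illustration: the function names, the straight-line relative-motion model and the 5 km alert threshold are all assumptions made for the example, whereas operational services fuse data from distributed sensors and use full orbital dynamics and probabilistic miss-distance estimates.

```python
# Minimal conjunction-screening sketch (illustrative assumptions only):
# positions in km, velocities in km/s, straight-line relative motion.
import math

def closest_approach(r1, v1, r2, v2, horizon_s=3600.0):
    """Time (s) and distance (km) of closest approach within the screening
    horizon, assuming linear relative motion between the two objects."""
    dr = [b - a for a, b in zip(r1, r2)]          # relative position
    dv = [b - a for a, b in zip(v1, v2)]          # relative velocity
    denom = sum(c * c for c in dv)
    if denom == 0:                                # no relative motion
        t = 0.0
    else:                                         # minimize |dr + t*dv|, clamped to [0, horizon]
        t = min(max(-sum(a * b for a, b in zip(dr, dv)) / denom, 0.0), horizon_s)
    miss = math.sqrt(sum((a + t * b) ** 2 for a, b in zip(dr, dv)))
    return t, miss

def needs_maneuver(r1, v1, r2, v2, threshold_km=5.0):
    """Raise an alert when the predicted miss distance falls below the
    (assumed) screening threshold."""
    _, miss = closest_approach(r1, v1, r2, v2)
    return miss < threshold_km
```

In a decentralized setting, each sensor node would contribute observations to the predicted state vectors, and screens of this kind would run continuously over the catalog of tracked objects.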

Discussion: “time to value” and the steps towards “smart regulation” of people-centered, climate-aware decentralized digital networks

We all know that digital systems are undergoing a rapid transformation and expansion on a global scale in association with new business models and new players, including emerging new relationships with institutional sectors and a broad range of entrepreneurial activities in several technical fields associated with decentralized digital networks.

At the same time, citizens at large face increasing challenges, and their quality of life and sustainable future can only be secured effectively through a new generation of user-driven technology systems, making citizens an integral part and key stakeholders of future developments.

It is in this context that we focus this brief note on the need to guarantee carbon neutrality, addressing the impending climate disaster, as well as our global safety; this is the central endeavour that should drive technology governance in a digital age. Advanced Earth Observation methods and space-related communication and autonomous systems are part of this overall challenge. They represent an emerging opportunity for the adoption of decentralized digital networks together with AI-enabled innovations through public-private interactions.

Our argument is that all this requires a revisited “digital humanism”, together with a rethinking of techno-centric narratives of progress, embracing and harnessing uncertainty, as well as abandoning the fantasy of control over nature and the illusion of techno-centric dominance of digital systems and AI-enabled innovations.

Saving lives, predicting natural disasters, preventing fires and controlling the erosion of coastal areas, as well as providing quality food and services for all, can only be secured effectively through a new generation of user-driven, low-cost, space-based observation and human-based participatory systems, which require adequate resources that can only be obtained if citizens become an integral part of future developments. In addition, dealing with climate change, dramatic biodiversity reduction, health and economic crises, uncertainty and risks, together with ensuring security and safe conditions for our populations, can only be addressed if new digital initiatives move forward in full alignment with the required green transition.

In addition, the critical point to note is that developing decentralized digital networks together with space-based systems and AI-enabled innovations for safety and security requires consideration of an adequate “time to value”, because it requires extra steps, software and, above all, education and research. In particular, their successful implementation may depend on the following far-reaching, ambitious steps: i) pursuing quantum technology and in-orbit demonstration and validation activities for upgrading Earth Observation, science and navigation data and services; ii) setting up an optical communication network in space, in partnership with terrestrial network providers, to build an integrated network, which should extend beyond low Earth orbits by establishing an interplanetary internet; iii) a collective effort by all space actors for the responsible use of space, ensuring safe access to space.

Overall, evolving forms of technology governance and the introduction of digital standards should be oriented to guarantee improved collective user responsibility in an emerging decentralized digital age boosted by AI-enabled innovations. Promoting “human agency” and empowering people and users at large will require educating and training every single user, and this can only be achieved by boosting research and innovation, growth and competitiveness. It should include smaller companies and start-ups, stimulating forms of free and open competition with very large players. In addition, technology governance should facilitate access to and use of data by consumers, while providing incentives for them to invest in ways to generate value through data, as well as to safeguard against situations of illegal data transfer and to fight against mendacity (e.g., Jay, 2010).

To conclude, emerging regulatory issues and related forms of “smart regulation” for a decentralized digital age should help promote critical approaches to decentralized digital networks and AI, with clear accountability and clarity about boundaries and purpose, together with individual responsibility. Collective human wisdom needs to be strengthened, embracing and harnessing uncertainty, as well as abandoning the fantasy of control over nature and the illusion of techno-centric dominance in an emerging decentralized digital age.

Increasing tensions between developers/promoters and human-led policy making require the negotiation of trade-offs, which can only be considered through a transdisciplinary approach to collective behaviors and the consideration of “human agency” across economics, philosophy, law, science and technology studies, history and sociology, engaging with all the necessary ingredients of an emerging decentralized digital age driven by AI-enabled innovations.


I would like to thank all of those who have contributed to this note and to many inspiring discussions about emerging issues associated with technology governance and new space-based digital networks, as well as for detailed comments on the manuscript, including Pedro Conceição (UNDP), Maria Manuel Leitão Marques (European Parliament), Jose Moura and Pedro Ferreira (Carnegie Mellon University), Miguel Bello (AIR Centre), Ricardo Conde and Hugo Costa (Portuguese Space Agency), Tiago Oliveira (AGIF), Nuno Nunes (LARSyS, Instituto Superior Tecnico), Jose Manuel Mendonça and Pedro G. Oliveira (INESC TEC, University of Porto), João Barros (Veniam), Chiara Manfletti (NeuraSpace), Nuno Sebastião (Feedzai), Manuela Veloso (JP Morgan), Rogério Carapuça (APDC), Rodrigo Costa (REN).


  1. Eurospace. More than 470 spacecraft were launched every year in 2017, 2018 and 2019, while only 110 spacecraft were launched on average per year between 2000 and 2013
  2. An indicative list: Space X Starlink, Amazon Kuiper, the success of One Web, Boeing V-band, Iceye, Kepler, Telesat LEO, Spire, Theia, etc.
  3. ESA

Sample references

  • Karamjit S. Gill (2022), Book review, “Nowotny 2021: In AI we trust”, AI & Society, January 2022.
  • Scuri, S., Ferreira, M., Nunes, N., Nisi, V. and Mulligan, C. (2022), “Hitting the Triple Bottom Line – Widening the HCI Approach to Sustainability”, CHI ’22, April 29 – May 5, 2022, New Orleans, USA.
  • Joseph B. Bak-Coleman, Mark Alfano, Wolfram Barfuss, Carl T. Bergstrom, Miguel Centeno, Iain D. Couzin, Jonathan F. Donges, Mirta Galesic, Andrew S. Gersick, Jennifer Jacquet, Albert B. Kao, Rachel E. Moran, Pawel Romanczuk, Daniel I. Rubenstein, Kaia J. Tombak, Jay J. Van Bavel and Elke U. Weber (2021), “Stewardship of global collective behavior”, PNAS, June 21, 2021.
  • Reema Patel (2021), “Reboot AI with human values”, Nature, 598, pp. 27-28, October 2021.
  • Helga Nowotny (2021), “In AI We Trust: Power, Illusion and Control of Predictive Algorithms”, Polity Press.
  • Sam Richards (2021), “Web2 vs Web3”.
  • Preethi Kasireddy (2021), “The Architecture of a Web 3.0 application”.
  • Helga Nowotny (2020), “Life in the Digital Time Machine”, The Wittrock Lecture Book Series, N. 11, Swedish Collegium for Advanced Study (SCAS).
  • Max Mersch and Richard Muirhead (2019), “What Is Web 3.0 & Why It Matters”, 31 December 2019,
  • San Murugesan (2019), “As the Web celebrates its 30th Anniversary, what should its future be?”, IEEE Spectrum, 22 March 2019.
  • Thelisson, E., Morin, J.-H., Rochel, J. (2019), “AI Governance: digital responsibility as a building block”, 2 DELPHI 167.
  • UNDP (2019), “The Human Development Report”, UNDP, New York.
  • Chris Dixon (2018), “Why Decentralization Matters” Feb 18, 2018.
  • Vitalik Buterin (2017), “The Meaning of Decentralization” Feb 6, 2017.
  • Joydeep Bhattacharya, “What is Web3.0?”,
  • Werner Vermaak, “What is Web3?”,
  • Jay, Martin (2010), “The Virtues of Mendacity: On Lying in Politics”, University of Virginia Press.