Since at least the 1970s there have been major advances in computer technology. Initially, the change was the ability to digitize things. As the speed and capacity of computers increased, the world became more digitized, and the economy was transformed. In 1983, TIME magazine referred to this change as the “New Economy” (Alexander, Bolt, and Zagorin 1983), contrasting the traditional smokestack industries with the newer high-technology (digital) industries and production floors filled with robots.
The TIME magazine article discusses the upheaval in the labor market as employment fell in some industries and occupations and rose in others. It notes the debate over whether the New Economy would produce a net increase or decrease in jobs. The expectation was that economic productivity would increase substantially, but there is some question as to whether that happened.
In 1988, the Internet opened for commercial use, which transformed communication. As the Internet became more widely available and the cost of connecting devices fell, more devices were built with Wi-Fi capabilities and sensors. More recently, increases in computer capacity and speed led to an explosion of networks of connected devices, including personal electronics and sensors, which is referred to as the Internet of Things (IoT).
The IoT is made up of devices that are networked or electronically connected. IoT makes use of computer networks to allow, for example, distance-based medical exams or ordering a pizza using a smart speaker. It is not feasible to list all of the ways things are or could be networked, but consider the following. “Say for example you are on your way to a meeting; your car could have access to your calendar and already know the best route to take. If the traffic is heavy your car might send a text to the other party notifying them that you will be late. What if your alarm clock wakes up you at 6 a.m. and then notifies your coffee maker to start brewing coffee for you? What if your office equipment knew when it was running low on supplies and automatically re-ordered more?” (Morgan 2014). The adoption of fifth-generation (5G) networks will significantly expand the ability to network devices and to meet the demands of increasingly data-intensive applications.
More recently, we have seen the beginning of another round of major technological change, driven by massive improvements in the computing capacity available to store and process big data, in computer speed, and in analytic techniques such as machine learning. Storage capacity has expanded due to advances in cloud technology. These developments have led to major advances in artificial intelligence (AI). For good discussions of artificial intelligence, including a new class of robots, see Ford (2015).
There are three different concepts of AI: automation, prediction, and true artificial general intelligence. Automation is associated with robotics and the production of goods and services. Robots, which were originally machines programmed to do a repetitive task, such as assembling a car, can now learn how to, for example, deliver mail in an office.
Regarding the second concept, AI is seen as essentially a tool for prediction, in which the computer system “learns” as it compiles massive amounts of data and then makes decisions. For example, a computer is shown a picture and predicts whether it is a picture of a cat. Chatbots are used to respond to support inquiries and increase customer engagement. An autonomous car “sees” something in front of it and decides whether it is something it needs to stop for. The limitation is that the car may not recognize a person walking a bike as a pedestrian, and thus does not predict that it should stop.
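The prediction concept can be illustrated with a deliberately simplified sketch. Real systems learn from massive data sets using far more sophisticated models, but the logic is the same: the program is given labeled examples and then predicts a label for a new case. The toy feature vectors and labels below are hypothetical, standing in for image data.

```python
# A minimal sketch of "AI as prediction": a 1-nearest-neighbor classifier
# that "learns" from labeled examples and predicts labels for new inputs.
# The numbers are hypothetical toy features, not real image data.
import math

# Hypothetical training data: each "image" reduced to two numeric features.
training_examples = [
    ((0.9, 0.8), "cat"),
    ((0.8, 0.9), "cat"),
    ((0.1, 0.2), "not cat"),
    ((0.2, 0.1), "not cat"),
]

def predict(features):
    """Return the label of the closest training example (Euclidean distance)."""
    nearest = min(training_examples, key=lambda ex: math.dist(features, ex[0]))
    return nearest[1]

print(predict((0.85, 0.75)))  # near the "cat" examples -> "cat"
print(predict((0.15, 0.25)))  # near the "not cat" examples -> "not cat"
```

The sketch also hints at the limitation noted above: a new input unlike anything in the training data (a person walking a bike, say) is still forced into the nearest known category, whether or not that category is appropriate.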
Regarding artificial general intelligence, Kahneman (forthcoming) has suggested there is very little we do that, eventually, computers will not be able to do. This concept is not specifically discussed in this report, but the implications are important for the work we do in the Andrew Young School of Policy Studies.
Horrigan (2013) defines “big data” as “non-sampled data, characterized by the creation of databases from electronic sources whose primary purpose is something other than statistical inference.” Gandomi and Haider (2015) identify three features of big data: large volume (over one terabyte), unstructured form, and high-frequency generation. Another characteristic of big data sets is that they are so large and complex that it becomes difficult to process the data using existing database management tools and traditional data processing applications (González-Bailón 2013; Schadt, Linderman, Sorenson, Lee, and Nolan 2010). Big data sets are not the same as large administrative data sets, but both are important.
The availability of big data sets (and data mining technologies) plays an important role in machine learning; big data sets are necessary for computers to “learn.” Thus, the increasing availability of big data sets is an important factor in the development of the new technology. Machine learning and big data sets are both required for AI.
The availability of administrative data sets has been an important development in conducting research. The availability of big data sets has the same potential. However, there are barriers to their use, including the required computer capacity, the complexity and non-representativeness of the data, and privacy issues. Data can come in many forms besides numbers, including text, pictures, and even video.
Desouza and Jacob (2017) note that there is “an increasingly popular perception that Big Data holds vast potential for improving the decision-making process of both public and private organizations” (p. 1044). While it is generally believed that the use of big data can improve public sector outcomes, namely, policies and programs, Desouza and Jacob note that the literature is unsettled, and thus that there is a great deal of work that should be done exploring the question of whether big data can enhance public policymaking.
It is important to keep in mind that these drivers are changing our world while other, perhaps related, changes are occurring, including:
- Racial/ethnic/gender/age/income inequality
- Demographic changes such as increasing diversity and aging of the population
- Slow and rapid-onset climatological disasters
- Political polarization
- Shifting of responsibility for service delivery from the public to the private sector
- Advances in genetics and biotechnology
- Gig economy/contingent employee-employer relationships
While these forces are also driving change, the focus of the Policy in the Digital World project is the implications of new technology (digitization, IoT, machine learning/AI, and big data). But the implications of new technology cannot always be divorced from these other forces.
An alternative to focusing on the impacts of new technology or other forces is to focus on the transformation of the current economy and society to a positive economy/society future (SDG 8). These organized efforts go by various names, including The Circular Economy, Commons Economy, Degrowth, Doughnut Economy, Equitable Economy, Low-carbon Economy, Participatory Economy, Sustainable Economy, and many others. See, for example, Alperovitz (2011), or Raworth (2017) on the “Doughnut Economy,” or a discussion of it by the World Economic Forum.
Similarly, the Grand Challenges for Social Work represent “a dynamic social agenda, focused on improving individual and family well-being, strengthening the social fabric, and helping create a more just society.” The agenda’s 12 challenges include: ensure healthy development for all youth; stop family violence; promote smart decarceration; and reduce extreme economic inequality. Potentially, AI could be used to address these challenges.
Ultimately, the Andrew Young School’s role in the digital world may incorporate research and teaching to identify impacts of the digital world as well as to identify a sustainable future in this digital world.
Return to the overview of the “Identifying the Landscape of New Technology” report.
Proceed to the Applications section of the “Identifying the Landscape of New Technology” report.
Meserole (2018) provides a nontechnical explanation of how machine learning works, a discussion of some applications, and some predictions regarding the future of machine learning.
The term “artificial intelligence” was coined by John McCarthy in a 1955 proposal for a research workshop held at Dartmouth College.
Agrawal, Gans, and Goldfarb (2018) discuss several issues associated with prediction, such as the kinds of decisions for which machine predictions differ from human decisions, including the factors that go into a decision, like ethical issues.
For a discussion of big data definitions and features and the need for analytics to leverage big data, see Gandomi and Haider (2015).
 Tambe and Rice (forthcoming) discuss several ways that artificial intelligence is being used to address social problems.