
Five ways to make AI a greater force for good in 2021


A year ago, none the wiser about what 2020 would bring, I reflected on the pivotal moment that the AI community was in. 2018 had seen a series of high-profile automated failures, like self-driving car crashes and discriminatory recruiting tools. In 2019, the field responded with more talk of AI ethics than ever before. But talk, I said, was not enough. We needed to take tangible actions. Two months later, the coronavirus shut down the world.

In our new socially distanced, remote-everything reality, these conversations about algorithmic harms suddenly came to a head. Systems that had been at the fringe, like HireVue’s face-scanning algorithms and workplace surveillance tools, were going mainstream. Others, such as tools to remotely monitor and evaluate students, were spinning up in real time. In August, after the UK government’s attempt to replace in-person exams with an algorithm for university admissions failed spectacularly, hundreds of students gathered in London to chant, “Fuck the algorithm.” “This is becoming the battle cry of 2020,” tweeted AI accountability researcher Deb Raji, when a Stanford protestor yelled it again over a different debacle a few months later.

At the same time, there was indeed more action. In one major victory, Amazon, Microsoft, and IBM banned or suspended their sale of face recognition to law enforcement after the killing of George Floyd spurred global protests against police brutality. It was the culmination of two years of fighting by researchers and civil rights activists to demonstrate that the companies’ technologies were ineffective and discriminatory. Another small yet notable change: for the first time ever, NeurIPS, one of the most prominent AI research conferences, required researchers to submit an ethics statement with their papers.

So here we are at the start of 2021, with more public awareness of and regulatory attention on AI’s influence than ever before. My New Year’s resolution: Let’s make it count. Here are five hopes that I have for AI in the coming year.

Reduce corporate influence in research

The tech giants have disproportionate control over the direction of AI research. This has shifted the direction of the AI field as a whole toward increasingly big data and big models. There are several consequences of singularly investing in this approach. It blows up the climate impact of AI advancements, locks out resource-constrained labs from participating in the field, and leads to lazier scientific inquiry by ignoring the range of other approaches. As Google’s ousting of Timnit Gebru revealed, tech giants will readily limit the field’s ability to investigate other consequences as well.

But much of corporate influence comes down to money and the lack of alternative funding. As I wrote last year in my profile of OpenAI, the lab initially sought to rely only on independent wealthy donors. The bet proved unsustainable, and four years later, it signed an investment deal with Microsoft. My hope is we’ll see more governments step into this void to provide non-defense-related funding options for researchers. It won’t be a perfect solution, but it’ll be a start. Governments are beholden to the public, not the bottom line.

Refocus on common sense understanding

The overwhelming attention on bigger and badder models has overshadowed one of the central goals of AI research: to create intelligent machines that don’t just pattern match but actually understand meaning. While corporate influence is a major contributor to this trend, there are other culprits as well. Research conferences and peer-review publications place a heavy emphasis on achieving “state-of-the-art” results. But state of the art is often poorly measured by tests that can be beaten with more data and larger models.

It’s not that large-scale models could never reach common sense understanding. That’s still an open question. But there are other avenues of research deserving of greater investment. Some experts have placed their bets on neurosymbolic AI, which combines deep learning with symbolic knowledge systems. Others are experimenting with more probabilistic techniques that use far less data, inspired by a human child’s ability to learn with very few examples.

In 2021, I hope the field will realign its incentives to prioritize comprehension over prediction. Not only could this lead to more technically robust systems; the improvements would have major social implications as well. The susceptibility of current deep-learning systems to being fooled, for example, undermines the safety of self-driving cars and poses dangerous possibilities for autonomous weapons. The inability of systems to distinguish between correlation and causation is also at the root of algorithmic discrimination.
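That fragility can be seen in miniature with a gradient-sign attack. Below is a minimal NumPy sketch using a toy linear scorer as a stand-in for a trained network; every name and number is illustrative, and on a real deep network the gradient of the loss replaces the weight vector used here.

```python
import numpy as np

# Toy linear "model": predicts class 1 when w . x > 0.
# (Illustrative stand-in for a trained network; FGSM on a deep net
# uses the gradient of the loss with respect to the input instead of w.)
w = np.linspace(-1.0, 1.0, 100)

def predict(x: np.ndarray) -> int:
    return int(w @ x > 0)

# A clean input the model classifies as class 1.
x = 0.3 * np.sign(w)

# Fast-gradient-sign-style perturbation: step each feature against the
# gradient of the score (which, for a linear model, is just w). On real
# networks a much smaller epsilon suffices, because real inputs are never
# this perfectly aligned with the decision boundary.
epsilon = 0.7
x_adv = x - epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # → 1 0 (the perturbed input flips class)
```

The point is not the toy model but the mechanism: a small, structured nudge to every input feature is enough to flip the output, which is exactly the weakness that matters for perception systems in cars and weapons.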

Empower marginalized researchers

If algorithms codify the values and perspectives of their creators, a broad cross-section of humanity should be present at the table when they are developed. I saw no better evidence of this than in December of 2019, when I attended NeurIPS. That year, it had a record number of women and minority speakers and attendees, and I could feel it tangibly shift the tenor of the proceedings. There were more talks than ever grappling with AI’s influence on society.

At the time I lauded the community for its progress. But Google’s treatment of Gebru, one of the few prominent Black women in the industry, showed how far there still is to go. Diversity in numbers is meaningless if those individuals aren’t empowered to bring their lived experience into their work. I’m optimistic, though, that the tide is changing. The flashpoint sparked by Gebru’s firing turned into a critical moment of reflection for the industry. I hope this momentum continues and converts into long-lasting, systemic change.

Center the perspectives of impacted communities

There’s also another group to bring to the table. One of the most exciting trends from last year was the emergence of participatory machine learning. It’s a provocation to reinvent the process of AI development to include those who ultimately become subject to the algorithms.

In July, the first conference workshop dedicated to this approach collected a wide range of ideas about what that could look like. It included new governance procedures for soliciting community feedback; new model auditing methods for informing and engaging the public; and proposed redesigns of AI systems to give users more control of their settings.

My hope for 2021 is to see more of these ideas trialed and adopted in earnest. Facebook is already testing out a version of this with its external oversight board. If the company follows through with allowing the board to make binding changes to the platform’s content moderation policies, the governance structure could become a feedback mechanism worthy of emulation.

Codify guardrails into regulation

Thus far, grassroots efforts have led the movement to mitigate algorithmic harms and hold tech giants accountable. But it will be up to national and international regulators to set up more permanent guardrails. The good news is that lawmakers around the world have been watching and are in the midst of drafting legislation. In the US, Congress members have already introduced bills to address facial recognition, AI bias, and deepfakes. Several of them also sent a letter to Google in December expressing their intent to continue pursuing this regulation.

So my last hope for 2021 is that we see the passing of some of these bills. It’s time we codify what we’ve learned over the past few years, and move away from the fiction of self-regulation.




The biggest step the Biden administration took on climate yesterday wasn’t rejoining the Paris Agreement


While the Biden Administration is being celebrated for its decision to rejoin the Paris Agreement in one of its first executive orders after President Joe Biden was sworn in, it wasn’t the biggest step the administration took to advance its climate agenda.

Instead, it was a move to get to the basics of monitoring and accounting, of metrics and dashboards. While companies track their revenues and expenses and monitor for all sorts of risks, impacts from climate change and emissions aren’t tracked in the same way. Now, just as there are general principles for financial accounting, there will be principles for accounting for the impact of climate through what’s called the social cost of carbon.

Among the flurry of paperwork coming from Biden’s desk were executive orders calling for a review of Trump-era rule-making around the environment and the reinstitution of strict standards for fuel economy, methane emissions, appliance and building efficiency, and overall emissions. But even these steps are likely to pale in significance to the fifth section of the ninth executive order to be announced by the new White House.

That’s the section addressing the accounting for the benefits of reducing climate pollution. Until now, the U.S. government hasn’t had a framework for accounting for what it calls the “full costs of greenhouse gas emissions” by taking “global damages into account”.

All of this is part of a broad commitment to let data and science inform policymaking across government, according to the Biden Administration.

Biden writes:

“It is, therefore, the policy of my Administration to listen to the science; to improve public health and protect our environment; to ensure access to clean air and water; to limit exposure to dangerous chemicals and pesticides; to hold polluters accountable, including those who disproportionately harm communities of color and low-income communities; to reduce greenhouse gas emissions; to bolster resilience to the impacts of climate change; to restore and expand our national treasures and monuments; and to prioritize both environmental justice and the creation of the well-paying union jobs necessary to deliver on these goals.”

The specific section of the order addressing accounting and accountability calls for a working group to come up with three metrics: the social cost of carbon (SCC), the social cost of nitrous oxide (SCN) and the social cost of methane (SCM) that will be used to estimate the monetized damages associated with increases in greenhouse gas emissions.

As the executive order notes, “[an] accurate social cost is essential for agencies to accurately determine the social benefits of reducing greenhouse gas emissions when conducting cost-benefit analyses of regulatory and other actions.” What the Administration is doing is attempting to provide a financial figure for the damages wrought by greenhouse gas emissions in terms of rising interest rates, and the destroyed farmland and infrastructure caused by natural disasters linked to global climate change.
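At its core, the accounting the order describes is simple arithmetic: tons of avoided emissions multiplied by a per-ton social cost. A minimal sketch, with placeholder dollar values that are purely illustrative and not the working group’s actual interim figures:

```python
# Illustrative only: the dollar figures below are placeholders,
# not the working group's actual interim values.
SCC_USD_PER_TON_CO2 = 51.0      # hypothetical social cost of carbon
SCM_USD_PER_TON_CH4 = 1500.0    # hypothetical social cost of methane

def monetized_benefit(tons_co2_avoided: float,
                      tons_ch4_avoided: float = 0.0) -> float:
    """Dollar value of avoided damages for a proposed rule's emission cuts."""
    return (tons_co2_avoided * SCC_USD_PER_TON_CO2
            + tons_ch4_avoided * SCM_USD_PER_TON_CH4)

# A hypothetical rule avoiding 2 million tons of CO2 and 10,000 tons of methane:
benefit = monetized_benefit(2_000_000, 10_000)
print(f"${benefit:,.0f}")   # → $117,000,000
```

The hard part, of course, is not the multiplication but choosing defensible per-ton figures, which is exactly what the working group is chartered to do.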

These kinds of benchmarks aren’t flashy, but they are concrete ways to determine accountability. That accountability will become critical as the country takes steps to meet the targets set in the Paris Agreement. It also gives companies looking to address their emissions footprints an economic framework to point to as they talk to their investors and the public.

The initiative will include top leadership like the Chair of the Council of Economic Advisers, the Director of the Office of Management and Budget, and the Director of the Office of Science and Technology Policy (a position that Biden elevated to a cabinet-level post).

Representatives from each of the major federal agencies overseeing the economy, national health, and the environment will be members of the working group, along with the National Climate Advisor and the Director of the National Economic Council.

While the rule-making is proceeding at the federal level, some startups are already developing services to help businesses monitor their emissions output.

These are companies like CarbonChain, Persefoni, and SINAI Technologies, and their work complements non-profits like CDP, which works with companies to assess carbon emissions.

Biden’s plan will have the various agencies and departments working quickly. The administration expects an interim SCC, SCN, and SCM within the next 30 days, which agencies will use when monetizing the value of changes in greenhouse gas emissions resulting from regulations and agency actions. The President wants final metrics to be published by January of next year.

The executive order also restored protections to national parks and lands that had been opened to oil and gas exploration and commercial activity under the Trump Administration and blocked the development of the Keystone Pipeline, which would have brought oil from Canadian tar sands into and through the U.S.

“The Keystone XL pipeline disserves the U.S. national interest. The United States and the world face a climate crisis. That crisis must be met with action on a scale and at a speed commensurate with the need to avoid setting the world on a dangerous, potentially catastrophic, climate trajectory. At home, we will combat the crisis with an ambitious plan to build back better, designed to both reduce harmful emissions and create good clean-energy jobs,” according to the text of the Executive Order. “The United States must be in a position to exercise vigorous climate leadership in order to achieve a significant increase in global climate action and put the world on a sustainable climate pathway. Leaving the Keystone XL pipeline permit in place would not be consistent with my Administration’s economic and climate imperatives.”



Ars online IT roundtable today: What’s the future of the data center?



If you’re in IT, you probably remember the first time you walked into a real data center—not just a server closet, but an actual raised-floor data center, where the door whooshes open in a blast of cold air and noise and you’re confronted with rows and rows of racks, monolithic and gray, stuffed full of servers with cooling fans screaming and blinkenlights blinking like mad. The data center is where the cool stuff is—the pizza boxes, the blade servers, the NASes and the SANs. Some of its residents are more exotic—the Big Iron in all its massive forms, from Z-series to Superdome and all points in between.

For decades, data centers have been the beating hearts of many businesses—the fortified secret rooms where huge amounts of capital sit, busily transforming electricity into revenue. And they’re sometimes a place for IT to hide, too—it’s kind of a standing joke that whenever a user you don’t want to see is stalking around the IT floor, your best bet to avoid contact is just to badge into the data center and wait for them to go away. (But, uh, I never did that ever. I promise.)

But the last few years have seen a massive shift in the relationship between companies and their data—and the places where that data lives. Sure, it’s always convenient to own your own servers and storage, but why tie up all that capital when you don’t have to? Why not just go to the cloud buffet and pay for what you want to eat and nothing more?




Transforming the energy industry with AI


For oil and gas companies, digital transformation is a priority—not only as a way to modernize the enterprise, but also to secure the entire energy ecosystem. With that lens, the urgency of applying artificial intelligence (AI) and machine learning capabilities for optimization and cybersecurity becomes clear, especially as threat actors increasingly target connected devices and operating systems, putting the oil and gas industry in collective danger. The year-over-year explosion in industry-specific attacks underscores the need for meaningful advancements and maturity in cybersecurity programs.

However, most companies don’t have the resources to implement sophisticated AI programs to stay secure and advance digital capabilities on their own. Irrespective of size, available budget, and in-house personnel, all energy companies must manage operations and security fundamentals to ensure they have visibility and monitoring across powerful digital tools to remain resilient and competitive. The achievement of that goal is much more likely in partnership with the right experts.

MIT Technology Review Insights, in association with Siemens Energy, spoke to more than a dozen information technology (IT) and cybersecurity executives at oil and gas companies worldwide to gain insight about how AI is affecting their digital transformation and cybersecurity strategies in oil and gas operating environments. Here are the key findings:

  • Oil and gas companies are under pressure to adapt to dramatic changes in the global business environment. The coronavirus pandemic dealt a stunning blow to the global economy in 2020, contributing to an extended trend of lower prices and heightening the value of increased efficiency to compensate for market pressures. Companies are now forced to operate in a business climate that necessitates remote working, with the added pressure to manage the environmental impact of operations growing ever stronger. These combined factors are pushing oil and gas companies to pivot to new, streamlined ways of working, making digital technology adoption critical.
  • As oil and gas companies digitalize, the risk of cyberattacks increases, as do opportunities for AI. Companies are adding digital technology for improved productivity, operational efficiency, and security. They’re collecting and analyzing data, connecting equipment to the internet of things, and tapping cutting-edge technologies to improve planning and increase profits, as well as to detect and mitigate threats. At the same time, the industry’s collective digital transformation is widening the surface for cybercriminals to attack. IT is under threat, as is operational technology (OT)—the computing and communications systems that manage and control equipment and industrial operations.
  • Cybersecurity must be at the core of every aspect of companies’ digital transformation strategies. The implementation of new technologies affects interdependent business and operational functions and underlying IT infrastructure. That reality calls for oil and gas companies to shift to a risk management mindset. This includes designing projects and systems within a cybersecurity risk framework that enforces companywide policies and controls. Most important, they now need to access and deploy state-of-the-art cybersecurity tools powered by AI and machine learning to stay ahead of attackers.
  • AI is optimizing and securing energy assets and IT networks for increased monitoring and visibility. Advancements in digital applications in industrial operating environments are helping improve efficiency and security, detecting machine-speed attacks amidst the complexity of the rapidly digitalizing operating environments.
  • Oil and gas companies look to external partners to guard against growing cyberthreats. Many companies have insufficient cybersecurity resources to meet their challenges head-on. “We are in a race against the speed of the attackers,” Repsol Chief Information Officer Javier García Quintela explains in the report. “We can’t provide all the cybersecurity capabilities we need from inside.” To move quickly and address their vulnerabilities, companies can find partners that can provide expertise and support as the threat environment expands.

Cybersecurity, AI, and digitalization

Energy sector organizations are presented with a major opportunity to deploy AI and build out a data strategy that optimizes production and uncovers new business models, as well as secure operational technology. Oil and gas companies are faced with unprecedented uncertainty—depressed oil and gas prices due to the coronavirus pandemic, a multiyear glut in the market, and the drive to go green—and many are making a rapid transition to digitalization as a matter of survival. From moving to the cloud to sharing algorithms, the oil and gas industry is showing there is robust opportunity for organizations to evolve with technological changes.

In the oil and gas industry, the digital revolution has enabled companies to connect physical energy assets with hardware control systems and software programs, which improves operational efficiency, reduces costs, and cuts emissions. This trend is due to the convergence of energy assets connected to OT systems, which manage, monitor, and control energy assets and critical infrastructure, and IT networks that companies use to optimize data across their corporate environments.

With billions of OT and IT data points captured from physical assets each day, oil and gas companies are now turning to built-for-purpose AI tools to provide visibility and monitoring across their industrial operating environments—both to make technologies and operations more efficient, and for protection against cyberattacks in an expanded threat landscape. Because energy companies’ business models rely on the convergence of OT and IT data, companies see AI as an important tool to gain visibility into their digital ecosystems and understand the context of their operating environments. Enterprises that build cyber-first digital deployments similarly have to accommodate emerging technologies, such as AI and machine learning, but spend less time on strategic realignment or change management.
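As a toy illustration of the kind of monitoring described above, here is a minimal rolling z-score anomaly detector over a simulated sensor feed. It is a sketch only: the window size, threshold, and data are invented for the example, and commercial OT-security tools use far richer models that fuse OT and IT telemetry.

```python
import numpy as np

def flag_anomalies(readings, window=50, threshold=5.0):
    """Flag indices whose reading deviates sharply from a rolling baseline."""
    readings = np.asarray(readings, dtype=float)
    flags = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

# Simulated pipeline-pressure feed with one injected spike at index 120,
# standing in for a faulty sensor or a spoofed reading.
rng = np.random.default_rng(1)
pressure = 100 + rng.normal(0, 0.5, size=200)
pressure[120] += 10

print(flag_anomalies(pressure))  # index 120 should be flagged
```

The design point this illustrates is the one Simonovich makes: detection has to run continuously against a learned baseline of normal operation, because machine-speed attacks leave no time for manual review of raw telemetry.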

Importantly, for oil and gas companies, AI, which may have once been reserved for specialized applications, is now optimizing everyday operations and providing critical cybersecurity defense for OT assets. Leo Simonovich, vice president and global head of industrial cyber and digital security at Siemens Energy, argues, “Oil and gas companies are becoming digital companies, and there shouldn’t be a trade-off between security and digitalization.” Therefore, Simonovich continues, “security needs to be part of the digital strategy, and security needs to scale with digitalization.”

To navigate today’s volatile business landscape, oil and gas companies need to simultaneously identify optimization opportunities and cybersecurity gaps in their digitalization strategies. That means building AI and cybersecurity into digital deployments from the ground up, not bolting them on afterward.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

