Indestructible Terminator-style killer robots move one step closer to reality as scientists discover self-healing metals

The idea of indestructible killer robots may sound like something straight out of the Terminator movie.

But they could soon become a reality, as scientists have just witnessed metal healing itself for the first time, without any human intervention.

A US-based study has overturned everything we thought we knew about metals by revealing that cracks from wear and tear can actually mend themselves under certain conditions.

It’s a discovery that has the potential to revolutionise engineering, with the prospect of self-healing engines, planes and even robots now on the horizon.

‘This was absolutely stunning to watch first-hand,’ said Brad Boyce, a scientist at Sandia National Laboratories who led the study with Texas A&M University.

The idea of indestructible killer robots may sound like something taken straight out of the Terminator movie
Scientists were 40minutes into the experiment when the damage reversed as a ‘t-junction’ crack fused back together as if it were never there in the first place

‘What we have confirmed is that metals have their own intrinsic, natural ability to heal themselves, at least in the case of fatigue damage at the nanoscale.’

Metals that are currently used to build vital infrastructure such as bridges and planes undergo a lot of repeated stress and motion which causes microscopic cracks to form over time.

While this fatigue damage usually causes machines to break, Mr Boyce and his team witnessed the nano-sized fracture shrink by 18nm.

This was an entirely unexpected discovery as scientists only intended to evaluate how cracks would spread through a 40-nm-thick piece of platinum when pressure was applied.

They were 40 minutes into the experiment when the damage reversed, as a ‘t-junction’ crack fused back together as if it were never there in the first place.

Then, as more pressure was applied, the crack regrew in a different direction, as amazed scientists watched through a microscope.

‘From solder joints in our electronic devices to our vehicle’s engines to the bridges that we drive over, these structures often fail unpredictably due to cyclic loading that leads to crack initiation and eventual fracture,’ Mr Boyce continued.

‘When they do fail, we have to contend with replacement costs, lost time and, in some cases, even injuries or loss of life. The economic impact of these failures is measured in hundreds of billions of dollars every year for the U.S.

Source: https://www.dailymail.co.uk/sciencetech/article-12318733/Indestructible-Terminator-style-killer-robots-one-step-closer-reality-scientists-discover-self-healing-metals.html

AI experts sound alarm on technology going into 2024 election: ‘We’re not prepared for this’

AI-generated political disinformation already has gone viral online ahead of the 2024 election

AI experts and tech-inclined political scientists are sounding the alarm on the unregulated use of AI tools going into an election season.

Generative AI can not only rapidly produce targeted campaign emails, texts or videos, it also could be used to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen.

A booth is ready for a voter, Feb. 24, 2020, at City Hall in Cambridge, Mass., on the first morning of early voting in the state. (AP Photo/Elise Amendola)

“We’re not prepared for this,” warned A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox. “To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, well, it’s going to have a major impact.”

Among the many capabilities of AI, here are a few that will have significance ramifications with elections and voting: automated robocall messages, in a candidate’s voice, instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave.

Fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race.

“We’re not prepared for this,” warned A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox. “To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, well, it’s going to have a major impact.”

Among the many capabilities of AI, here are a few that will have significance ramifications with elections and voting: automated robocall messages, in a candidate’s voice, instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave.

Fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race.

Source: https://www.foxnews.com/politics/ai-experts-sound-alarm-technology-2024-election-were-not-prepared-for-this

US tech policy must keep pace with AI innovation

Image Credits: Ole_CNX (opens in a new window)/ Getty Images

As innovation in artificial intelligence (AI) outpaces news cycles and grabs public attention, a framework for its responsible and ethical development and use has become increasingly critical to ensuring that this unprecedented technology wave reaches its full potential as a positive contribution to economic and societal progress.

The European Union has already been working to enact laws around responsible AI; I shared my thoughts on those initiatives nearly two years ago. Then, the AI Act, as it is known, was “an objective and measured approach to innovation and societal considerations.” Today, leaders of technology businesses and the United States government are coming together to map out a unified vision for responsible AI.

The power of generative AI
OpenAI’s release of ChatGPT captured the imagination of technology innovators, business leaders and the public last year, and consumer interest and understanding of the capabilities of generative AI exploded. However, with artificial intelligence becoming mainstream, including as a political issue, and humans’ propensity to experiment and test systems, the ability for misinformation, impact on privacy and the risk to cybersecurity and fraudulent behavior run the risk of quickly becoming an afterthought.

In an early effort to address these potential challenges and ensure responsible AI innovation that protects Americans’ rights and safety, the White House has announced new actions to promote responsible AI.

In a fact sheet released by the White House last week, the Biden-Harris administration outlined three actions to “promote responsible American innovation in artificial intelligence (AI) and protect people’s rights and safety.” These include:

New investments to power responsible American AI R&D.
Public assessments of existing generative AI systems.
Policies to ensure the U.S. Government is leading by example in mitigating AI risks and harnessing AI opportunities.
New investments
Regarding new investments, The National Science Foundation’s $140 million in funding to launch seven new National AI Research Institutes pales in comparison to what has been raised by private companies.

While directionally correct, the U.S. Government’s investment in AI broadly is microscopic compared to other countries’ government investments, namely China, which started investments in 2017. An immediate opportunity exists to amplify the impact of investment through academic partnerships for workforce development and research. The government should fund AI centers alongside academic and corporate institutions already at the forefront of AI research and development, driving innovation and creating new opportunities for businesses with the power of AI.

The collaborations between AI centers and top academic institutions, such as MIT’s Schwarzman College and Northeastern’s Institute for Experiential AI, help to bridge the gap between theory and practical application by bringing together experts from academic, industry and government to collaborate on cutting-edge research and development projects that have real-world applications. By partnering with major enterprises, these centers can help companies better integrate AI into their operations, improving efficiency, cost savings and better consumer outcomes.

Additionally, these centers help to educate the next generation of AI experts by providing students with access to state-of-the-art technology, hands-on experience with real-world projects and mentorship from industry leaders. By taking a proactive and collaborative approach to AI, the U.S. government can help shape a future in which AI enhances, rather than replaces, human work. As a result, all members of society can benefit from the opportunities created by this powerful technology.

Public assessments
Model assessment is critical to ensuring that AI models are accurate, reliable and bias-free, essential for successful deployment in real-world applications. For example, imagine an urban planning use case in which generative AI is trained on redlined cities with historically underrepresented poor populations. Unfortunately, it is just going to lead to more of the same. The same goes for bias in lending, as more financial institutions are using AI algorithms to make lending decisions.

If these algorithms are trained on data discriminatory against certain demographic groups, they may unfairly deny loans to those groups, leading to economic and social disparities. Although these are just a few examples of bias in AI, this must stay top of mind regardless of how quickly new AI technologies and techniques are being developed and deployed.

To combat bias in AI, the administration has announced a new opportunity for model assessment at the DEFCON 31 AI Village, a forum for researchers, practitioners and enthusiasts to come together and explore the latest advances in artificial intelligence and machine learning. The model assessment is a collaborative initiative with some of the key players in the space, including Anthropic, Google, Hugging Face, Microsoft, Nvidia, OpenAI and Stability AI, leveraging a platform offered by Scale AI.

In addition, it will measure how the models align with the principles and practices outlined in the Biden-Harris administration’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. This is a positive development whereby the administration is directly engaging with enterprises and capitalizing on the expertise of technical leaders in the space, which have become corporate AI labs.

Source: https://techcrunch.com/2023/05/14/u-s-tech-policy-must-keep-pace-with-ai-innovation/

Exit mobile version