FUTURE SHOCK World’s first humanoid robot factory set to open later this year where robots will be helping build other robots

ROBOTS are set to take over a humanoid robot factory, where they will assist in building more of their own kind.

Agility Robotics has revealed plans to construct the world’s largest humanoid robot factory in Salem, Oregon.

Engineers say they hope to produce more than 10,000 of the bipedal Digit robots annually. Credit: Agility Robotics
The robot can lift and lower objects, and can gauge when a human or barrier is in its path. Credit: Agility Robotics

The tech startup aims to “enable humans to be more human” by utilizing robots that can take care of common human tasks.

Even Amazon has backed the forward-looking company, which has secured over $180 million in private funding since its founding in 2015.

Engineers behind the bipedal robot known as Digit say that they hope to produce more than 10,000 robots annually.

Each Digit stands just under 6 feet tall and has two legs and two arms.

The bot can perceive when a human or barrier is in its path and can navigate around them, allowing it to work seamlessly alongside people.

Between unloading trailers and moving packages, the Digit bot has been designed for an array of jobs.

The company hopes that its robots will alleviate some of the work that people no longer want to do.

Damion Shelton, co-founder and CEO of Agility Robotics, told CNBC that he hopes the robots will help meet the rising demand for manufacturing labor.

The new factory will cover over 70,000 square feet, which means there will be plenty of room for the new bots to get to work.

COO Aindrea Campbell told CNBC that plans for the ground-breaking facility are well underway.

“It’s a really big endeavor, not something where you flick a switch and suddenly turn it on,” Campbell said. “There’s kind of a ramp-up process.”

“The inflection point today is that we’re opening the factory, installing the production lines, and starting to grow capacity and scale with something that’s never been done before,” she added.

Agility noted that the new factory, called RoboFab, could employ up to 500 people in addition to its bot force.

The factory is set to open later this year and the Digit bot will be generally available to commercial customers in 2025.

Source: https://www.the-sun.com/tech/9214972/worlds-first-humanoid-robot-factory/

US tech policy must keep pace with AI innovation

Image Credits: Ole_CNX / Getty Images

As innovation in artificial intelligence (AI) outpaces news cycles and grabs public attention, a framework for its responsible and ethical development and use has become increasingly critical to ensuring that this unprecedented wave of technology reaches its full potential as a positive contribution to economic and societal progress.

The European Union has already been working to enact laws around responsible AI; I shared my thoughts on those initiatives nearly two years ago. At the time, the AI Act, as it is known, was “an objective and measured approach to innovation and societal considerations.” Today, leaders of technology businesses and the United States government are coming together to map out a unified vision for responsible AI.

The power of generative AI
OpenAI’s release of ChatGPT captured the imagination of technology innovators, business leaders and the public last year, and consumer interest and understanding of the capabilities of generative AI exploded. However, with artificial intelligence going mainstream, including as a political issue, and given people’s propensity to experiment with and probe these systems, the risks of misinformation, privacy harms, cybersecurity threats and fraud could quickly become an afterthought.

In an early effort to address these potential challenges and ensure responsible AI innovation that protects Americans’ rights and safety, the White House has announced new actions to promote responsible AI.

In a fact sheet released by the White House last week, the Biden-Harris administration outlined three actions to “promote responsible American innovation in artificial intelligence (AI) and protect people’s rights and safety.” These include:

New investments to power responsible American AI R&D.
Public assessments of existing generative AI systems.
Policies to ensure the U.S. Government is leading by example in mitigating AI risks and harnessing AI opportunities.

New investments
Regarding new investments, the National Science Foundation’s $140 million in funding to launch seven new National AI Research Institutes pales in comparison to what private companies have raised.

While directionally correct, the U.S. government’s investment in AI broadly is microscopic compared with other governments’, notably China’s, which began investing in 2017. An immediate opportunity exists to amplify the impact of investment through academic partnerships for workforce development and research. The government should fund AI centers alongside academic and corporate institutions already at the forefront of AI research and development, driving innovation and creating new opportunities for businesses with the power of AI.

Collaborations between AI centers and top academic institutions, such as MIT’s Schwarzman College and Northeastern’s Institute for Experiential AI, help bridge the gap between theory and practical application by bringing together experts from academia, industry and government to collaborate on cutting-edge research and development projects with real-world applications. By partnering with major enterprises, these centers can help companies better integrate AI into their operations, improving efficiency, reducing costs and delivering better consumer outcomes.

Additionally, these centers help to educate the next generation of AI experts by providing students with access to state-of-the-art technology, hands-on experience with real-world projects and mentorship from industry leaders. By taking a proactive and collaborative approach to AI, the U.S. government can help shape a future in which AI enhances, rather than replaces, human work. As a result, all members of society can benefit from the opportunities created by this powerful technology.

Public assessments
Model assessment is critical to ensuring that AI models are accurate, reliable and bias-free, which is essential for successful deployment in real-world applications. For example, imagine an urban planning use case in which generative AI is trained on data from redlined cities with historically underserved poor populations; the system will simply reproduce those same patterns. The same goes for bias in lending, as more financial institutions use AI algorithms to make lending decisions.

If these algorithms are trained on data discriminatory against certain demographic groups, they may unfairly deny loans to those groups, leading to economic and social disparities. Although these are just a few examples of bias in AI, this must stay top of mind regardless of how quickly new AI technologies and techniques are being developed and deployed.
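The lending scenario above can be sketched in a few lines. This is a toy, hypothetical example, not any real institution's model: historical approval decisions apply a harsher bar to one group, and a naive model fit to those labels reproduces the disparity even though the groups' credit scores are drawn from identical distributions.

```python
# Toy sketch (all data and thresholds invented for illustration): a model
# trained on historically biased lending decisions reproduces that bias.
import random

random.seed(0)

def historical_decision(group, credit_score):
    # Biased historical labels: same score, different approval bar by group.
    threshold = 600 if group == "A" else 680
    return credit_score >= threshold

# Training data with identical score distributions for both groups.
train = [(g, s, historical_decision(g, s))
         for g in ("A", "B")
         for s in [random.randint(500, 800) for _ in range(5000)]]

def fit_threshold(group):
    # "Train" a naive per-group approval threshold: the lowest score that
    # was historically approved (a stand-in for any model free to use
    # group membership as a feature).
    return min(s for g, s, approved in train if g == group and approved)

model = {g: fit_threshold(g) for g in ("A", "B")}

# The learned model denies group B applicants it would approve in group A.
applicant_score = 650
for g in ("A", "B"):
    print(g, "approved" if applicant_score >= model[g] else "denied")
```

The point is not the particular model, only that any learner rewarded for matching discriminatory historical labels will inherit the discrimination, which is why assessment has to look at outcomes across groups rather than accuracy alone.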

To combat bias in AI, the administration has announced a new opportunity for model assessment at the DEF CON 31 AI Village, a forum for researchers, practitioners and enthusiasts to come together and explore the latest advances in artificial intelligence and machine learning. The model assessment is a collaborative initiative with some of the key players in the space, including Anthropic, Google, Hugging Face, Microsoft, Nvidia, OpenAI and Stability AI, leveraging a platform offered by Scale AI.

In addition, the assessment will measure how the models align with the principles and practices outlined in the Biden-Harris administration’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. This is a positive development: the administration is directly engaging with enterprises and capitalizing on the expertise of the technical leaders whose companies have become corporate AI labs.

Source: https://techcrunch.com/2023/05/14/u-s-tech-policy-must-keep-pace-with-ai-innovation/

I, ROBOT I’m an AI expert – everyone on earth will DIE unless we stop rapidly developing bots & we should halt development NOW

A TOP AI expert has issued a stark warning over the potential for world extinction that super-smart AI technology could bring.

Eliezer Yudkowsky is a leading AI researcher and he claims that “everyone on the earth will die” unless we shut down the development of superhuman intelligence systems.

Yudkowsky believes superhuman AI will be the death of us all unless we ‘shut it all down’ (stock image). Credit: Getty

The 43-year-old is a co-founder of the Machine Intelligence Research Institute (MIRI) and claims to know exactly how “horrifically dangerous this technology” is.

He fears that when it comes down to humans versus smarter-than-human intelligence – the result is a “total loss”, he wrote in TIME.

As a metaphor, he says, this would be like an “11th century trying to fight the 21st century”.

In short, humans would lose dramatically.

On March 29, the Future of Life Institute published an open letter called “Pause Giant AI Experiments” that demanded an immediate six-month pause in the training of powerful AI systems.

It has been signed by the likes of Apple co-founder Steve Wozniak and Elon Musk.

However, the American theorist says he declined to sign this petition as it is “asking for too little to solve it”.

The threat is so great that he argues that extinction by AI should be “considered a priority above preventing a full nuclear exchange”.

He warns that the most likely result of robot science is that we will create “AI that does not do what we want, and does not care for us nor for sentient life in general.”

We are not ready, Yudkowsky admits, to teach AI how to be caring as we “do not currently know how”.

Instead, the stark reality is that in the mind of a robot “you are made of atoms that it can use for something else”.

“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”

Yudkowsky is keen to point out that presently “we have no idea how to determine whether AI systems are aware of themselves”.

What this means is that scientists could accidentally create “digital minds which are truly conscious”, which raises all kinds of moral dilemmas, since conscious beings should have rights and not be owned.

Our ignorance, he implores, will be our downfall.

Since researchers don’t know whether they are creating self-aware AI, he says, “you have no idea what you are doing and that is dangerous and you should stop”.

Yudkowsky claims that it could take us decades to solve the issue of safety in superhuman intelligence – this safety being “not killing literally everyone” – and in that time we could all be dead.

The expert’s central point is this: “We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan.”

Source: https://www.thesun.co.uk/tech/21909547/ai-expert-stop-bots-halt-development-now/
