The lines are blurring between collaboration and competition in the world of AI. Microsoft, in its latest annual report, has officially designated OpenAI, the artificial intelligence startup it has heavily invested in, as a competitor. This unexpected move highlights the increasingly complex relationship between the two tech giants.
For years, Microsoft’s list of competitors has included familiar names like Amazon, Apple, Google, and Meta. But the addition of OpenAI signals a shift, acknowledging that despite their close partnership, the two companies are increasingly encroaching on each other’s territory.
Microsoft has invested a reported $13 billion in OpenAI and serves as its exclusive cloud provider. OpenAI’s powerful AI models are integrated into various Microsoft products for both commercial and consumer use.
However, both companies are now offering competing products and services in the AI space. While some companies opt to work directly with OpenAI, others access its models through Microsoft’s Azure OpenAI Service. Microsoft has also developed its own AI chatbot, Copilot, which is integrated into Bing search and Windows operating systems.
An OpenAI spokesperson downplayed the significance of the listing, telling CNBC that nothing about the relationship between the two companies has changed and that their partnership was established with the understanding that they would compete. “Microsoft remains a good partner to OpenAI,” the spokesperson said.
A Tumultuous Relationship
Despite the reassurances, the Microsoft-OpenAI partnership has faced its share of turbulence this year. The abrupt ousting and subsequent reinstatement of OpenAI CEO Sam Altman in November reportedly caught Microsoft CEO Satya Nadella off guard. Microsoft recently relinquished its non-voting board seat at OpenAI.
Furthermore, Nadella’s recent appointment of Mustafa Suleyman, a co-founder of Google’s DeepMind AI research lab, to lead Microsoft’s new AI unit has fuelled speculation about internal competition and the company’s long-term AI strategy.
Bangladesh has imposed a ban on several popular social media platforms, including Instagram, TikTok, WhatsApp, and YouTube, as per reports. This move, announced on Friday, August 2, has effectively restricted access to these widely-used social networks across the nation.
WhatsApp, Instagram, other platforms banned by Bangladesh
Global Eyes News first reported the ban through its official X account, confirming that from Friday onward, access to these social media networks would be limited throughout Bangladesh.
This decision comes shortly after a similar action by Turkey, which also announced a ban on Instagram earlier the same day.
Previous Meta platforms restrictions
According to reports, this latest ban follows a previous suspension of Meta’s platforms Instagram and Facebook in July. The earlier shutdown was in response to widespread unrest witnessed in the country over quota reforms. Sources indicate that access to Meta’s platforms was cut off via mobile networks around 12:15 PM on August 2. Unlike the previous comprehensive shutdown, the current restrictions are reportedly targeting mobile data connections.
Internet speed and VPN usage
Reports suggest that the country’s internet speed had returned to normal levels on August 1. However, with millions of mobile network users affected by the Facebook restriction, there is an expected surge in the use of Virtual Private Networks (VPNs), which could potentially slow down overall internet speeds.
Although our universe may seem stable, having existed for a whopping 13.7 billion years, several experiments suggest that it is at risk – walking on the edge of a very dangerous cliff. And it’s all down to the instability of a single fundamental particle: the Higgs boson.
In new research by me and my colleagues, just accepted for publication in Physics Letters B, we show that some models of the early universe – those which involve objects called light primordial black holes – are unlikely to be right, because they would have triggered the Higgs boson to end the cosmos by now.
The Higgs boson is responsible for the mass and interactions of all the particles we know of. That’s because particle masses are a consequence of elementary particles interacting with a field, dubbed the Higgs field. Because the Higgs boson exists, we know that the field exists.
You can think of this field as a perfectly still water bath that we soak in. It has identical properties across the entire universe. This means we observe the same masses and interactions throughout the cosmos. This uniformity has allowed us to observe and describe the same physics over several millennia (astronomers typically look backwards in time).
But the Higgs field isn’t likely to be in the lowest possible energy state it could be in. That means it could theoretically change its state, dropping to a lower energy state in a certain location. If that happened, however, it would alter the laws of physics dramatically.
Such a change would represent what physicists call a phase transition. This is what happens when water turns into vapour, forming bubbles in the process. A phase transition in the Higgs field would similarly create low-energy bubbles of space with completely different physics in them.
In such a bubble, the mass of electrons would suddenly change, and so would their interactions with other particles. Protons and neutrons – which make up the atomic nucleus and are made of quarks – would suddenly dissociate. Essentially, anybody experiencing such a change would likely no longer be able to report it.
Constant risk
Recent measurements of particle masses from the Large Hadron Collider (LHC) at Cern suggest that such an event might be possible. But don’t panic: this may only occur a few thousand billion billion years from now, long after we retire. For this reason, in the corridors of particle physics departments, the universe is usually said to be not unstable but “meta-stable”, because the world’s end will not happen anytime soon.
To form a bubble, the Higgs field needs a good reason. Due to quantum mechanics, the theory which governs the microcosmos of atoms and particles, the energy of the Higgs is always fluctuating. And it is statistically possible (although unlikely, which is why it takes so much time) that the Higgs forms a bubble from time to time.
However, the story is different in the presence of external energy sources like strong gravitational fields or hot plasma (a form of matter made up of charged particles): the field can borrow this energy to form bubbles more easily.
Therefore, although there is no reason to expect that the Higgs field forms numerous bubbles today, a big question in the context of cosmology is whether the extreme environments shortly after the Big Bang could have triggered such bubbling.
However, when the universe was very hot, although energy was available to help form Higgs bubbles, thermal effects also stabilised the Higgs by modifying its quantum properties. Therefore, this heat could not trigger the end of the universe, which is probably why we are still here.
Primordial black holes
In our new research, we showed there is one source of heat, however, that would constantly cause such bubbling (without the stabilising thermal effects seen in the early days after the Big Bang). That’s primordial black holes, a type of black hole which emerged in the early universe from the collapse of overly dense regions of spacetime. Unlike normal black holes, which form when stars collapse, primordial ones could be tiny – as light as a gram.
The existence of such light black holes is a prediction of many theoretical models that describe the evolution of the cosmos shortly after the Big Bang. This includes some models of inflation, suggesting the universe blew up hugely in size after the Big Bang.
However, proving this existence comes with a big caveat: Stephen Hawking demonstrated in the 1970s that, because of quantum mechanics, black holes evaporate slowly by emitting radiation through their event horizon (a point at which not even light can escape).
Hawking showed that black holes behave like heat sources in the universe, with a temperature inversely proportional to their mass. This means that light black holes are much hotter and evaporate more quickly than massive ones. In particular, if primordial black holes lighter than a few thousand billion grams (10 billion times smaller than the Moon’s mass) formed in the early universe, as many models suggest, they would have evaporated by now.
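The inverse mass–temperature relation is easy to sanity-check numerically. The sketch below is not from our paper – it just evaluates the standard semiclassical formulas, good to an order of magnitude – for a primordial black hole of 10^12 grams:

```python
import math

# CODATA constants (SI units)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.380649e-23       # Boltzmann constant, J/K

def hawking_temperature(mass_kg):
    """T_H = hbar c^3 / (8 pi G M k_B): temperature falls as mass grows."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

def evaporation_time(mass_kg):
    """t ~ 5120 pi G^2 M^3 / (hbar c^4): lifetime grows as the mass cubed."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

m = 1e9  # a 10^12-gram primordial black hole, expressed in kg
print(f"Hawking temperature: {hawking_temperature(m):.2e} K")  # ~1e14 K
print(f"Evaporation time:    {evaporation_time(m):.2e} s")     # ~1e11 s, i.e. millennia
```

For comparison, plugging in a solar mass (about 2 × 10^30 kg) gives a temperature of only ~6 × 10^-8 K and a lifetime vastly exceeding the age of the universe – which is why only the lightest primordial black holes would have evaporated by now.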
In the presence of the Higgs field, such objects would behave like impurities in a fizzy drink – helping the liquid form gas bubbles by contributing to its energy via the effect of gravity (due to the mass of the black hole) and the ambient temperature (due to its Hawking radiation).
A ransomware attack on a financial technology provider of digital payment systems forced the shutdown of services at nearly 300 cooperative and regional rural banks across India.
Mobile and online services on the Unified Payments Interface (UPI), the electronic fund transfer system Immediate Payment Service (IMPS) and retail payments at 300 small banks were shut down as a precautionary measure after the ransomware attack on tech provider C-Edge Technologies on Wednesday.
The cyber attack on C-Edge Technologies forced over 300 small banks using its financial technology services to temporarily stop their payment systems.
Connectivity with the payment systems was restored late on Thursday by NPCI, after a security review conducted by an independent forensic auditing firm confirmed that the impacted systems had been isolated to contain any potential spread of the ransomware.
The National Payments Corporation of India (NPCI) had issued notices to UPI and IMPS payment services on Wednesday to isolate C-Edge Technologies from accessing NPCI retail payment systems and limit the impact of the cyber attack.
“The impact was limited to C-Edge systems hosted in their data centre and not on any of the co-operative banks’ or regional banks’ own infrastructure. The services of co-operative banks and regional rural banks, which were dependent on C-Edge, have now been restored,” the NPCI said in a statement.
A joint venture of Tata Consultancy Services (TCS) and State Bank of India (SBI), C-Edge Technologies provides banking and finance software solutions, catering mostly to cooperative and regional rural banks. It functions as a technology, infrastructure and service provider for financial institutions in India and abroad.
In iOS 18’s latest developer preview, Siri gets a glow-up. Like, the whole phone actually glows around the edges when you invoke Siri.
A splash screen reintroduces you to the virtual assistant once you enable Apple Intelligence, an early version of which is now available on the iPhone 15 Pro and Pro Max in a developer beta. You’ll know Siri is listening when the edges of the screen glow, making it pretty obvious that something different is going on.
The big Siri AI update is still months away. This version comes with meaningful improvements to language understanding, but future updates will add features like awareness of what’s on your screen and the ability to take action on your behalf. Meanwhile, the rest of the Apple Intelligence feature set previewed in this update feels like a party waiting for the guest of honor.
That said, Siri’s improvements in this update are useful. Tapping the bottom of the screen twice will bring up a new way to interact with the assistant: through text. It’s also much better at parsing natural language, waiting more patiently through hesitations and “um”s as I stumble through questions. It also understands when I’m asking a follow-up question.
Outside of Siri, it’s kind of an Easter egg hunt finding bits of Apple Intelligence sprinkled throughout the OS. They’re in the mail app, with a summarize button at the top of each email now. And anywhere you can type and highlight text, you’ll find a new option called “writing tools” with AI proofreading, writing suggestions, and summaries.
“Help me write something” is pretty standard fare for generative AI these days, and Apple Intelligence does it as well as anyone else. You can have it make your text more friendly, professional, or concise. You can also create summaries of text or synthesize it into bulleted lists of key points or a table.
I’m finding these tools most useful in the Notes app, where you can now add voice recordings. In iOS 18, voice recordings finally come with automatic transcriptions, which is not an Apple Intelligence feature since it also works on my iPhone 13 Mini. But Apple Intelligence will let you turn a recording transcript into a summary or a checklist. This is helpful if you want to just free-associate while recording a memo and list a bunch of things you need to pack for an upcoming trip; Apple Intelligence turns it into a list that actually makes sense.
These writing tools are tucked out of the way, and if you weren’t looking for them, you might miss them entirely. The more obvious new AI features are in the mail app. Apple Intelligence surfaces what it deems to be important emails in a card that sits above the rest of your inbox marked as priority. Below that, emails show a brief summary in place of the first line or two of text that you’d normally see.
There’s something charming about AI’s sincere attempt to summarize promotional emails, trying to helpfully pull out bits of detail like “Backpacks and lunch boxes ship FREE” and “Organic white nectarines are sweet and juicy, in season now.” But the descriptions in my inbox were accurate — helpful in a few instances and harmless at worst. And the emails it gave priority status to were genuinely important, which is promising.
The search tool in the Photos app now uses AI to understand more complicated requests. You can ask for pictures of a particular person wearing glasses or all the food you ate in Iceland, all in natural language.
Meta Platforms (META.O) has agreed to pay $1.4 billion to Texas to resolve the state’s lawsuit accusing the Facebook parent of illegally using facial-recognition technology to collect biometric data of millions of Texans without their consent.
The terms of the settlement, disclosed on Tuesday, mark the largest accord ever by any single state, according to the lawyers for Texas, whose legal team included the plaintiffs firm Keller Postman.
The lawsuit, filed in 2022, was the first major case to be brought under Texas’ 2009 biometric privacy law, according to law firms tracking the litigation. A provision of the law provides damages of up to $25,000 per violation.
Texas accused Facebook of capturing biometric information “billions of times” from photos and videos that users uploaded to the social media platform as part of a free, discontinued feature called “Tag Suggestions.”
A spokesperson for Meta said the company is pleased to resolve the matter and looks forward to “exploring future opportunities to deepen our business investments in Texas, including potentially developing data centers.”
The company has continued to deny any wrongdoing.
Texas Attorney General Ken Paxton in a statement said the settlement marks the state’s “commitment to standing up to the world’s biggest technology companies and holding them accountable for breaking the law and violating Texans’ privacy rights.”
Texas and Meta said they reached an accord in May, weeks before a trial in state court was scheduled to begin.
Meta separately agreed to pay $650 million in 2020 to settle a biometric privacy class action that was brought under an Illinois privacy law that is considered one of the nation’s most stringent. The company also denied wrongdoing.
Alphabet (GOOGL.O)’s Google is separately fighting a lawsuit by Texas accusing the company of violating the state’s biometric law.
Apple (AAPL.O) relied on chips designed by Google rather than industry leader Nvidia to build two key components of its artificial intelligence software infrastructure for its forthcoming suite of AI tools and features, an Apple research paper published on Monday showed.
Apple’s decision to rely on Google’s (GOOGL.O) cloud infrastructure is notable because Nvidia (NVDA.O) produces the most sought-after AI processors.
Nvidia commands roughly 80% of a market that also includes chips made by Google, Amazon.com (AMZN.O) and other cloud computing companies.
In the research paper, Apple did not explicitly say that it used no Nvidia chips, but its description of the hardware and software infrastructure of its AI tools and features lacked any mention of Nvidia hardware.
Apple did not comment on Monday.
The iPhone maker said that to train its AI models, it used two flavors of Google’s tensor processing unit (TPU) that are organized in large clusters of chips.
To build the AI model that will operate on iPhones and other devices, Apple used 2,048 of the TPUv5p chips. For its server AI model, Apple deployed 8,192 TPUv4 processors.
Nvidia does not design TPUs but rather focuses its efforts on so-called graphics processing units (GPUs) that are widely used for AI efforts.
Unlike Nvidia, which sells its chips and systems as standalone products, Google sells access to TPUs through its Google Cloud Platform. Customers interested in buying access must build software through Google’s cloud platform in order to use the chips.
In the race towards a quantum future, researchers at the University of Bath have made a significant development that could revolutionize how we transmit data in the quantum age. Their innovation? A new generation of specialty optical fibers designed specifically to meet the unique challenges of quantum communication.
As we stand on the brink of the quantum computing era, promising unparalleled computational power and unbreakable encryption, our current data transmission infrastructure faces a critical limitation. The conventional optical fibers that form the backbone of today’s global internet are simply not up to the task of quantum communication. But fear not – a solution is on the horizon, and it’s thinner than a human hair.
“The conventional optical fibers that are the workhorse of our telecommunications networks of today transmit light at wavelengths that are entirely governed by the losses of silica glass,” says study co-author Dr. Kristina Rusimova, from the Department of Physics at Bath, in a statement. “However, these wavelengths are not compatible with the operational wavelengths of the single-photon sources, qubits, and active optical components, that are required for light-based quantum technologies.”
Enter the microstructured optical fiber. Unlike traditional optical fibers with their solid glass cores, these new fibers feature a complex pattern of air pockets running along their entire length. This seemingly simple change opens up a world of possibilities for controlling and manipulating light in ways crucial for quantum technologies.
One of the most exciting applications of these fibers is in creating the building blocks of a quantum internet. By carefully designing the structure of these fibers, researchers can generate pairs of entangled photons – particles of light that are inextricably linked, no matter how far apart they are. This quantum entanglement is the secret sauce that makes many quantum technologies possible.
“A quantum internet is an essential ingredient in delivering on the vast promises of such emerging quantum technology. Much like the existing internet, a quantum internet will rely on optical fibers to deliver information from node to node,” says Dr. Cameron McGarry, first author of the paper. “These optical fibers are likely to be very different to those that are used currently and will require different supporting technology to be useful.”
Hewlett Packard (HP) on Monday (July 29) launched the new OmniBook X and EliteBook Ultra, the company’s first Copilot+ series computers in India.
The new computers come with a Qualcomm Snapdragon X Elite processor and a dedicated Qualcomm Hexagon Neural Processing Unit (NPU), capable of performing 45 trillion operations per second (TOPS) to run language models and generative AI locally on the device.
They also come with a dedicated Copilot button on the keyboard. It triggers the ChatGPT-powered Copilot assistant to help users with tasks such as developing a presentation or editing fun family videos on the computer.
Other key aspects of the new HP laptops include HP AI Companion, Copilot and Poly Camera Pro.
With HP AI Companion, users can upload PDFs or other documents to the computer and ask the AI assistant to analyse them and offer a summary.
The Poly Camera Pro feature offers a better user interface for video conferencing. It utilises the NPU to power enhancements like Spotlight, Background Blur, Replace, and Auto Framing, ensuring the presenter stays centred and visible to viewers.
“We are thrilled to unveil our first fully loaded AI PCs in India with the HP EliteBook Ultra and HP OmniBook X. These AI PCs are designed to create more personalized and meaningful user experiences, revolutionizing the way we interact with technology. By integrating advanced AI capabilities, we are setting a new standard in the industry, making technology smarter, more intuitive, and more responsive to individual needs,” said Vineet Gehani, Senior Director –Personal Systems, HP India.
The new HP EliteBook Ultra is said to be a tailor-made PC for corporate users. It comes with a 14-inch 2.2K display and features a full-size, backlit keyboard with HP Image-pad and an Image sensor-based click pad with multi-touch gesture support.
Inside, it supports up to 32 GB of LPDDR5X-8533 RAM, up to 1 TB of storage, a 5 MP IR web camera and a 3-cell, 59 Wh Li-ion polymer battery. A smart 65 W USB Type-C slim adapter can charge it from zero to 50 per cent in just 30 minutes, and the laptop is rated for up to 26 hours of battery life.
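As a quick plausibility check (my own back-of-the-envelope arithmetic, not HP’s), the fast-charge claim lines up with the adapter rating: delivering half of a 59 Wh battery in 30 minutes implies an average charging power of about 59 W, just under the 65 W the adapter can supply.

```python
battery_wh = 59.0        # 3-cell Li-ion polymer battery, from the spec above
adapter_w = 65.0         # USB Type-C adapter rating
charge_fraction = 0.50   # claimed zero-to-50% charge
hours = 0.5              # claimed 30 minutes

energy_needed_wh = battery_wh * charge_fraction   # 29.5 Wh
avg_power_w = energy_needed_wh / hours            # 59.0 W average

print(f"Average charging power required: {avg_power_w:.1f} W")
print(f"Within adapter rating: {avg_power_w <= adapter_w}")
# Real chargers taper near full charge and lose a few percent to heat,
# so the 6 W of headroom matters; the claim is at least arithmetically sound.
```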
Hi, friends! Welcome to Installer No. 47, your guide to the best and Verge-iest stuff in the world. (If you’re new here, welcome, so psyched you found us, and also you can read all the old editions at the Installer homepage.)
This week, I’ve been reading about Skibidi Toilet and the future of mall brands and the legacy of Bell Labs, watching Dirty Pop and catching up on Cobra Kai, downloading every single podcast episode mentioned in this excellent Reddit thread, writing stuff down with Napkin, and trying desperately to figure out what I forgot to pack for vacation. I’ve also been trying new blueberry muffin recipes all week — thanks to everyone who sent me one!
Speaking of which: As I mentioned last week, Installer is taking a summer break. I’m going to go sit outside and stare at trees for a couple of weeks. (If you have good fun books I should read, by the way, please send them my way.) I’ll be back here August 17th with a big catch-up Installer, but I hope you have a great couple of weeks, and keep telling me about everything you’re into!
Before I go, I also have for you a new way to use Apple Maps, an interesting interview with Mark Zuckerberg, the best way to watch the Olympics, some of the internet’s best and silliest websites, and much more. Let’s do it.
The Drop
Peacock’s Olympics Multiview. Peacock is doing ~ the most ~ for the Olympics this year. Personalized highlights! AI Al Michaels! The Gold Zone! But I’ll be spending the next two weeks locked to the Multiview. Four events at a time, and I get to pick which one gets the audio? That’s the future of TV right there.
The Asus ROG Ally X. A Windows gaming handheld that is fast, comfortable, and quiet? That’s the dream right there. Except Windows still stinks on the tiny screen, and $800 is a lot for this thing. But still! We’re making progress!
Apple Maps for web. Apple’s new beta Maps tool is a stark, simple, lovely contrast to the cluttered mess of Google Maps. It’ll be interesting to see how much Apple tries to do here — Maps is great for navigation but rough for place discovery, but maybe this is a sign Apple wants to fix that.
Capacities. I’ve been messing with this superpowerful note-taking app for a while, and I really like the way Capacities organizes things. Now there’s a mobile app, too, which makes it much easier to get stuff into the system. It’s definitely a power-user tool, but I’m liking it a lot.
“Inside Mark Zuckerberg’s AI Era.” A long, unusually thoughtful interview with Mark Zuckerberg, in which Zuck has a very funny tan but also some really interesting thoughts on AI, AR, and how we think about the real world and the internet going forward. I was surprised by how much I enjoyed watching this.
Llama 3.1. The occasion for that Zuckerberg interview was the launch of Meta’s new AI model, which is apparently better and faster in the way that every new model now is the best and fastest. But the combination of the open-source approach here, and Meta’s shockingly popular Meta AI bot, means Llama is legit one to watch.
The Elgato Stream Deck XLR Dock. If you use an external mic for video calls, streaming, podcasting, whatever, this dock / Stream Deck combo might be the best simple USB setup I’ve ever seen. I bought one immediately.
Deadpool & Wolverine. Right now, it looks like Twisters might be the movie of the summer. I’m a little nervous about this one, which has been so hyped and overexposed, but I still have high hopes for two of my favorite Marvel characters.
The US space agency maintains that Sunita Williams and the eight other astronauts on board the International Space Station are safe and in ‘good spirits’
Fifty days and counting, Indian-origin astronaut Sunita Williams remains in limbo, uncertain of when and how she may return to Earth. However, the US space agency maintains that she and the eight other astronauts on board the International Space Station are safe and in ‘good spirits’.
An Indian space expert, speaking with a touch of humour, likened her situation to the state of ‘Trishanku’ – stuck indefinitely, if willingly, between a rock and a hard place.
Today, NASA provided further updates, indicating that they are closer to identifying the root cause of the Boeing Starliner’s malfunctioning systems, such as the failed thrusters and a series of helium leaks during its maiden test flight. However, there was no clarity on when, or if, Astronaut Sunita Williams and her crewmate Butch Wilmore will return, or if they will do so aboard the same Boeing Starliner spacecraft.
According to Boeing, the Starliner can remain docked with the space station for a maximum of ninety days, after which the batteries on board the spacecraft may deplete. As a result, US space technologists have approximately forty days left to determine whether Sunita Williams and Butch Wilmore will return to Earth on the impaired Boeing Starliner, or if they will use SpaceX’s Crew Dragon or the Russian Soyuz spacecraft. Both these standby vehicles are already docked at the space station, so neither Sunita Williams nor the other eight astronauts are truly stranded in space.
NASA Commercial Crew Program Manager Steve Stich noted that the crew is in good spirits and making the most of their time on the station as part of Expedition 71, given that both Sunita and Butch have previously undertaken long-duration missions. He added that ‘contingency plans’ are in place, but the current effort is focused on bringing both Sunita and Butch back to Earth on the Boeing Starliner itself.
Putting a brave face on the situation, Mark Nappi, Starliner program manager and vice president of Boeing, said, “I’m very confident we have a good vehicle [in Boeing Starliner] to bring the crew back with.”
Boeing has encountered several setbacks during the Starliner’s development. Originally contracted for $4.2 billion, the company has now spent approximately $5.7 billion, with the mission still incomplete. NASA sought a second alternative to SpaceX’s Crew Dragon, which is why Boeing Starliner was being developed. Boeing is also facing broader difficulties, with its aviation and aircraft business stumbling, and recently, Boeing CEO David Calhoun was questioned by US Senators regarding the company’s safety culture and transparency during a US Congress appearance. If the Boeing Starliner fails to return the two astronauts to Earth, it would represent a significant setback for this aerospace and space technology giant.
In a statement today, NASA said, ‘With ground testing of a Reaction Control System (RCS) thruster complete and disassembly and inspections concluding, the Starliner team is reviewing data that will aid in future missions and pave the way for NASA astronauts Butch Wilmore and Sunita Williams to return to Earth. A landing date for the Starliner Crew Flight Test (CFT) will be scheduled following the Flight Test Readiness Review planned for later next week, with landing opportunities available throughout August. Testing of the RCS thruster at NASA’s White Sands Test Facility in New Mexico yielded meaningful findings for root cause assessments and to finalise flight rationale in support of a nominal undock and landing.’
The Starliner team plans to hot-fire 27 of 28 RCS thrusters this weekend while safely docked to the space station. This test aims to verify thruster performance, similar to what would be done during future missions. The team also seeks another data point on helium leaks, which have remained stable since the spacecraft’s arrival at the station on June 6. The helium system has been closed for most of the time while docked, so no helium is leaking in that configuration.
Google is dying. Google is unstoppable. Somehow, right now, it feels like both of those things are true. For the first time in more than a decade, there appear to be products that might actually threaten Google Search as the centerpiece of the web — including OpenAI’s new SearchGPT. And yet Google Search continues to dominate the market and make truly unfathomable amounts of money.
On this episode of The Vergecast, we discuss the launch of SearchGPT, Google’s latest earnings, and the increasingly brazen ways AI companies are scraping the web for their own purposes. Who will win the future of search is anyone’s guess, but one thing’s for sure: the way the web used to work doesn’t work anymore. We need new rules, new norms, and new ideas about how the internet ought to be.
After that, we talk through yet another big week of gadget news, including the revelation that Amazon’s Alexa project is a money pit of epic proportions. We also talk about our reviews of the Samsung Galaxy Ring and Z Fold 6 and the Asus ROG Ally X.
Finally, in the lightning round, we talk about Apple Maps on the web, the NBA on Prime Video, and the increasing lengths to which you have to go to stream in 4K. The future is ads, apparently — and slightly blurry ones at that.
The big Sonos app redesign was intended to make the company’s software more modern, customizable, and easier to use. But two months after its May release, it’s hard to look at this situation as anything but a colossal unforced error. Sonos has been steadily adding back missing features and functionality with frequent app updates, but the chorus of customer frustration isn’t going away.
To that end, CEO Patrick Spence today published a letter that covers the progress Sonos has made with the new app — and what customers can expect in the near future. It also contains Sonos’ first direct apology for the rough patch that “too many” users have gone through. Some customers have been waiting for that after the company’s initial responses (like saying the app overhaul took “courage”) came across as tone-deaf, given all the bugs and technical difficulties.
We’ve heard your concerns about the app update launched on May 7 and appreciate your patience as we make improvements. Please read the letter from our CEO about our progress and commitment to delivering the Sonos experience you expect and deserve: https://t.co/7Y767JBEAc
“I want to begin by personally apologizing for disappointing you,” Spence writes. “There isn’t an employee at Sonos who isn’t pained by having let you down, and I assure you that fixing the app for all of our customers and partners has been and continues to be our number one priority.”
He goes on to lay out the company’s software roadmap from now through October. Sonos is currently updating and improving the app every two weeks and has most recently addressed issues with local library playback. But some much-requested features — like the ability to edit the song queue from within the Sonos app — aren’t expected to be available until the fall.
In hindsight, it’s painfully obvious that Sonos should have released the rebuilt app as a beta for early adopters of the Sonos Ace headphones, which aren’t compatible with the previous version, and kept the existing software in place while bringing the two to parity. But apparently there’s no putting the genie back in the bottle, so now the company is working as fast as it can to make the new app deliver on everything it was designed to do.
Here’s Spence’s letter in full:
We know that too many of you have experienced significant problems with our new app which rolled out on May 7, and I want to begin by personally apologizing for disappointing you. There isn’t an employee at Sonos who isn’t pained by having let you down, and I assure you that fixing the app for all of our customers and partners has been and continues to be our number one priority.
We developed the new app to create a better experience, with the ability to drive more innovation in the future, and with the knowledge that it would get better over time. However, since launch we have found a number of issues. Fixing these issues has delayed our prior plan to quickly incorporate missing features and functionality.
Since May 7, we have released new software updates approximately every two weeks, each making significant and meaningful improvements, adding features and fixing bugs. Please see the release notes for Sonos software updates for detailed information on what has been released to date.
While these software updates have enabled the majority of our customers to have a robust experience using the Sonos app, there is more work to be done. We have prioritized the following improvements in our next phase of software updates:
July and August:
Improving the stability when adding new products
Implementing Music Library configuration, browse, search, and play
August and September:
Improving Volume responsiveness
User interface improvements based on customer feedback
Improving overall system stability and error handling
Apple Maps is finally available on the web. Through a beta that launched on Wednesday afternoon, you can now get driving and walking directions as well as view ratings and reviews from the web version of Apple Maps in a desktop or mobile browser.
Apple Maps is available through the beta.maps.apple.com site. You can do most of what you can do in the iOS version of the app, including view guides, order food directly from Maps, explore cities, and get information about businesses. Apple says it’s going to launch additional features, like Look Around, in the coming months.
The web-based version of Apple Maps is only available in English for now and is compatible with Safari and Chrome on Mac and iPad, along with Chrome and Edge on Windows PCs. Apple plans on rolling out support for other languages, browsers, and platforms in the future. Apple notes that all developers using its MapKit JS tool can link out to Maps on the web.
Since the launch of Apple Maps on the iPhone in 2012, Apple has been gradually adding new features to the service, including detailed city maps, multi-stop routing, cycling directions, EV routing, and offline navigation.
Previously, versions of Apple Maps were available on the web through the work of developers, who used the API to create maps for sites like DuckDuckGo. Its official availability on the web is a big expansion and could help the app compete with Google Maps, which has been available in web browsers for years.
Mass layoffs and data breaches seem to dominate headlines in recent months. Now a startling new study suggests these two trends may be more closely linked than we ever imagined. Researchers from Binghamton University, in collaboration with international partners, have uncovered a potential cybersecurity time bomb lurking within corporate downsizing decisions. Their findings paint a sobering picture: companies that announce layoffs may be inadvertently increasing their risk of falling victim to devastating cyberattacks.
The study, presented at the Pacific Asia Conference on Information Systems in Vietnam, comes at a critical time. In the first quarter of 2023 alone, over 136,000 employees in the United States were let go in a wave of layoffs. Tech giants like Amazon, Google, and IBM weren’t spared, leaving thousands of skilled workers suddenly jobless. But as companies tighten their belts, they may be loosening the locks on their digital vaults.
Why layoffs are linked to poor cybersecurity
So, how exactly do layoffs make a company more vulnerable to cyber threats? The researchers identify several key factors:
First, there’s the human element. Layoffs create a perfect storm of negative emotions among both departing and remaining employees. Anxiety, stress, and resentment can cloud judgment, making people more likely to cut corners on cybersecurity protocols or fall for phishing scams. In some extreme cases, disgruntled ex-employees might even be tempted to strike back by exploiting their insider knowledge of company systems.
“Some companies try to be nice by announcing layoffs first, terminating access to the laid-off employees later, but that can easily open the door to cybersecurity risks—especially if the laid-off employee is feeling vengeful,” says lead researcher Thi Tran, an Assistant Professor of Management Information Systems at Binghamton, in a statement. “Because they used to be an employee, they have confidential information about security layers that can be bypassed. The more they know about the system, the worse it could be.”
Then there’s the brain drain effect. When companies downsize, they often lose valuable cybersecurity expertise. This leaves them less equipped to fend off increasingly sophisticated attacks. Imagine a fortress suddenly losing its most experienced guards – the walls may still stand, but they’re much easier to breach.
Budget cuts accompanying layoffs can also leave cybersecurity initiatives underfunded. Companies might delay crucial software updates or scrap plans for new security measures. It’s like deciding not to fix a leaky roof to save money – you might be fine for a while, but when the big storm hits, you’ll wish you had made the investment.
Lastly, the negative publicity surrounding layoffs can make a company an attractive target for hackers. Some cybercriminals, driven by a warped sense of justice, might see a downsizing company as deserving of attack. It’s a bit like kicking someone when they’re down – morally wrong, but unfortunately all too common in the digital underworld.
How companies can prevent data breaches
The study doesn’t just sound the alarm; it also offers a potential shield. The researchers found that companies with strong corporate social responsibility (CSR) practices may be better protected from this layoff-induced cyber vulnerability. CSR encompasses a company’s efforts to operate in an ethical and sustainable manner, benefiting society beyond just making profits. Think of a company that prioritizes environmental protection, fair labor practices, or community involvement.
But how does being a “good corporate citizen” help ward off cyberattacks? The researchers suggest several possibilities. First, companies with strong CSR tend to have better relationships with their employees, potentially reducing the risk of insider threats. They might also be more likely to provide support and resources to laid-off workers, lessening feelings of resentment. Additionally, a positive public image cultivated through CSR efforts could make a company a less appealing target for hacktivists or other politically motivated attackers.
‘Humans are the weakest link of the IT security chain’
This research serves as a wake-up call for business leaders navigating tough economic times. While layoffs might seem like a quick fix for financial woes, they could be opening the door to even costlier cyber disasters. An IBM Cost of Data Breach report in 2023 revealed that the average data breach cost companies a staggering $4.5 million – a 15% increase from the previous three years. This price tag could easily wipe out any short-term savings from workforce reduction.
Associate Professor Sumantra Sarkar, who is helping conduct the research, puts this in perspective: “In the old days, industries were more manual-oriented, and you could not replace people with the click of a button, but in the current information technology world, you hire people by the thousands, and you can lay off people much the same way. This opens the door for our research because humans are statistically the weakest link of the IT security chain.”
The message is clear: cybersecurity can’t be an afterthought, even (or especially) during times of corporate belt-tightening. Companies considering layoffs need to factor in the potential cybersecurity risks and take proactive measures to mitigate them. This might involve strengthening security protocols, providing extra support and training for remaining employees, and maintaining robust CSR initiatives even in the face of budget pressures.
In the darkness of the deep ocean, where sunlight can’t reach, scientists have stumbled upon a strange phenomenon — the production of oxygen where none should exist. An international team says the unexpected discovery of what they’re calling “dark oxygen” challenges our understanding of deep-sea ecosystems and could have far-reaching implications for ocean chemistry and climate science.
The team, led by Andrew K. Sweetman from The Scottish Association for Marine Science, conducted experiments nearly four kilometers (roughly 2.5 miles) beneath the surface of the Pacific Ocean. Their study area, known as the Clarion-Clipperton Zone (CCZ), is a vast expanse of seafloor covered in potato-sized lumps called polymetallic nodules. These nodules, rich in valuable metals like manganese, nickel, and copper, have attracted interest from mining companies eager to exploit their potential.
It’s not the promise of mineral wealth that has scientists excited, however; it’s what’s happening around these nodules that’s truly groundbreaking.
Typically, in the deep sea, oxygen is consumed as organisms breathe and decompose organic matter. Scientists can measure this oxygen consumption to understand the health and activity of deep-sea ecosystems. However, when Sweetman and his team placed experimental chambers on the seafloor to measure oxygen levels, they observed something entirely unexpected: oxygen levels were increasing, not decreasing.
Over the course of two days, oxygen concentrations in some chambers more than tripled. This “dark oxygen production,” as the researchers termed it in the journal Nature Geoscience, occurred without any sunlight – the driving force behind oxygen production through photosynthesis in surface waters and on land.
“For aerobic life to begin on the planet, there had to be oxygen, and our understanding has been that Earth’s oxygen supply began with photosynthetic organisms,” says Sweetman, who leads the Seafloor Ecology and Biogeochemistry research group at SAMS, in a media release. “But we now know that there is oxygen produced in the deep sea, where there is no light. I think we, therefore, need to revisit questions like: Where could aerobic life have begun?”
The discovery raises intriguing questions about the source of this mysterious oxygen. After ruling out experimental errors and known biological processes, the team began to suspect the nodules themselves might be involved.
Further investigation revealed that the nodules possess an electrical potential, acting somewhat like natural batteries. The researchers measured voltage differences of up to 0.95 volts between different points on nodule surfaces – approaching the 1.23 volts theoretically required to split water molecules into hydrogen and oxygen.
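That 1.23-volt figure is the standard thermodynamic potential for water electrolysis, and it follows directly from the Gibbs free energy of the reaction. A quick back-of-the-envelope check (the constants are standard textbook values, not taken from the study):

```python
# Standard potential for splitting water: E = dG / (n * F)
FARADAY = 96485        # C/mol, Faraday constant
DELTA_G = 237_130      # J/mol, standard Gibbs free energy to split 1 mol of water
N_ELECTRONS = 2        # electrons transferred per H2 molecule produced

e_required = DELTA_G / (N_ELECTRONS * FARADAY)
print(f"Required potential: {e_required:.2f} V")        # ~1.23 V

e_measured = 0.95      # V, peak voltage measured across nodule surfaces
print(f"Nodules reach {e_measured / e_required:.0%} of the threshold")
```

At 0.95 volts a single nodule falls just short of the threshold, which is why the researchers describe the measured voltages as “approaching” rather than exceeding it.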
The significant electricity consumption of these companies calls for ongoing discussions around sustainability and renewable energy adoption.
Tech giants Google and Microsoft each consumed 24 TWh of electricity in 2023, surpassing the consumption of more than 100 countries, new research claims.
Analysis by Michael Thomas, shared on X, states that both Google and Microsoft used the same amount of energy as Azerbaijan, which has a GDP of $78.7 billion. In 2023, Google’s revenue was $307.4 billion, and Microsoft’s was $211.9 billion, Tech Radar reported.
This massive energy usage underscores the significant environmental impact of these companies but also highlights their potential to lead more sustainable initiatives.
For comparison, Iceland, Ghana, the Dominican Republic, and Tunisia each consumed 19 TWh, while Jordan consumed 20 TWh. Libya (25 TWh) and Slovakia (26 TWh) used slightly more power.
The comparison between entire countries and two single companies emphasizes the colossal energy demands of Big Tech. It also points to the environmental impacts of data centers that power cloud services and the new generation of artificial intelligence.
Google is planning to keep third-party cookies in its Chrome browser, it said on Monday, after years of pledging to phase out the tiny packets of code meant to track users on the internet.
The major reversal follows concerns from advertisers – the company’s biggest source of income – saying the loss of cookies in the world’s most popular browser will limit their ability to collect information for personalizing ads, making them dependent on Google’s user databases.
The UK’s Competition and Markets Authority had also scrutinized Google’s plan over concerns it would impede competition in digital advertising.
“Instead of deprecating third-party cookies, we would introduce a new experience in Chrome that lets people make an informed choice that applies across their web browsing, and they’d be able to adjust that choice at any time,” Anthony Chavez, vice president of the Google-backed Privacy Sandbox initiative, said in a blog post.
As humans wait to find out who will be the first person to land on Mars, a new study may have identified the first plant that will help colonize Mars. It’s not some hybrid flower or genetically engineered organism — it’s moss! Specifically, researchers say a desert-dwelling moss called Syntrichia caninervis can survive extreme conditions that would kill most other plants.
In a new study published in The Innovation, researchers put this tiny but tough plant through a gauntlet of tests simulating the harsh Martian environment. The results suggest S. caninervis could potentially grow on Mars, paving the way for future human settlements.
“Our study shows that the environmental resilience of S. caninervis is superior to that of some highly stress-tolerant microorganisms and tardigrades,” according to the research team, who include ecologists Daoyuan Zhang and Yuanming Zhang and botanist Tingyun Kuang of the Chinese Academy of Sciences, in a media release. “S. caninervis is a promising candidate pioneer plant for colonizing extraterrestrial environments, laying the foundation for building biologically sustainable human habitats beyond Earth.”
S. caninervis is found in some of Earth’s most inhospitable places, including scorching deserts and frigid mountain peaks. It forms crusty mats on the soil surface, helping to prevent erosion and retain precious water in arid regions.
The researchers subjected the moss to a battery of extreme conditions in the lab. They found it could lose over 98% of its water content and spring back to life within seconds of being rehydrated. It survived being frozen at -112°F (-80°C) for five years. Perhaps most impressively, it withstood radiation levels 2,000 times higher than what’s lethal to humans.
Finally, the team placed S. caninervis in a special chamber simulating multiple aspects of the Martian environment simultaneously – including low atmospheric pressure, extreme temperature swings, and intense UV radiation. The hardy moss survived up to a week in these Mars-like conditions.
“Our results indicate that S. caninervis is among the most radiation-tolerant organisms known,” the researchers write.
The moss owes this resilience to a suite of adaptations, including tightly overlapping leaves that conserve water and hair-like structures on the leaf tips that reflect excess sunlight. At the cellular level, the moss can rapidly shut down its metabolism when conditions are unfavorable, then quickly reactivate when things improve.
While no plant could survive indefinitely on the Martian surface as-is, S. caninervis could potentially grow in sheltered microhabitats or in greenhouse structures with some environmental controls. As a “pioneer species,” it could help transform the alien Martian landscape into something more hospitable for other organisms.
“Although there is still a long way to go to create self-sufficient habitats on other planets, we demonstrated the great potential of S. caninervis as a pioneer plant for growth on Mars,” the researchers conclude. “Looking to the future, we expect that this promising moss could be brought to Mars or the Moon to further test the possibility of plant colonization and growth in outer space.”
Of course, considerable technological hurdles remain before we can seriously contemplate Martian moss gardens. But this research highlights how Earth’s most resilient organisms might one day help us colonize other worlds. The first Martian settlers may find an unlikely ally in the humble moss.
Elon Musk, known for his outspoken views on artificial intelligence (AI), recently shared a video on his microblogging account that has sparked intrigue and amusement online. The video, created using AI animation techniques, portrays world leaders and prominent figures strutting down a virtual runway like fashion models.
Among those featured in the video are Joe Biden, the President of the United States; Kim Jong Un, Supreme Leader of North Korea; Donald Trump, former President of the United States; Barack Obama, former US President; Vladimir Putin, President of Russia; Xi Jinping, President of China; Mark Zuckerberg, CEO of Meta (formerly Facebook); Elon Musk himself, CEO of Tesla and SpaceX; Nancy Pelosi, Speaker of the United States House of Representatives; Hillary Clinton, former United States Secretary of State and former First Lady; and Pope Francis.
Each leader is depicted in distinct and varied outfits, adding a surreal twist to their usual public personas.
Musk added a caption to the video, “High time for an AI fashion show,” indicating a playful take on the potential applications of AI beyond its traditional roles.
The video quickly gained traction online, amassing over 1 million views within just 30 minutes of being shared. It has since continued to attract attention and commentary across social media platforms, with many users marvelling at the creativity and technological prowess behind the AI-generated production.
Earlier on July 20, Elon Musk shared a video on X that sparked a debate online. The video, produced using AI technology, portrayed a fictional narrative tied to recent elections, featuring animated depictions of prominent figures like Donald Trump, Joe Biden, Barack Obama, and Mark Zuckerberg.
Security experts said CrowdStrike’s (CRWD.O) routine update of its widely used cybersecurity software, which caused clients’ computer systems to crash globally on Friday, apparently did not undergo adequate quality checks before it was deployed.
The latest version of its Falcon sensor software was meant to make CrowdStrike clients’ systems more secure against hacking by updating the threats it defends against. But faulty code in the update files resulted in one of the most widespread tech outages in recent years for companies using Microsoft’s (MSFT.O) Windows operating system.
Global banks, airlines, hospitals and government offices were disrupted. CrowdStrike released information to fix affected systems, but experts said getting them back online would take time as it required manually weeding out the flawed code.
“What it looks like is, potentially, the vetting or the sandboxing they do when they look at code, maybe somehow this file was not included in that or slipped through,” said Steve Cobb, chief security officer at Security Scorecard, which also had some systems impacted by the issue.
Problems came to light quickly after the update was rolled out on Friday, and users posted pictures on social media of computers with blue screens displaying error messages. These are known in the industry as “blue screens of death.”
Patrick Wardle, a security researcher who specialises in studying threats against operating systems, said his analysis identified the code responsible for the outage.
The update’s problem was “in a file that contains either configuration information or signatures,” he said. Such signatures are code that detects specific types of malicious code or malware.
“It’s very common that security products update their signatures, like once a day… because they’re continually monitoring for new malware and because they want to make sure that their customers are protected from the latest threats,” he said.
The frequency of updates “is probably the reason why (CrowdStrike) didn’t test it as much,” he said.
It’s unclear how that faulty code got into the update and why it wasn’t detected before being released to customers.
“Ideally, this would have been rolled out to a limited pool first,” said John Hammond, principal security researcher at Huntress Labs. “That is a safer approach to avoid a big mess like this.”
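Hammond’s “limited pool” is the industry’s staged (or canary) rollout pattern: an update reaches a small fraction of the fleet first, and the push halts if those hosts fail. A minimal sketch of the idea, with stage fractions and names that are illustrative rather than anything CrowdStrike actually uses:

```python
import hashlib

# Hypothetical staged ("canary") rollout: each stage widens the share of the
# fleet that receives an update, and a crash at any stage halts the push.
STAGES = [0.01, 0.05, 0.25, 1.00]   # fraction of hosts per stage (illustrative)

def bucket(host_id: str) -> float:
    """Map a host ID to a stable pseudo-random value in [0, 1)."""
    digest = hashlib.sha256(host_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") / 2**32

def rollout(hosts, healthy):
    """Push the update stage by stage; stop as soon as an updated host fails."""
    updated = []
    for fraction in STAGES:
        for host in hosts:
            if host in updated or bucket(host) >= fraction:
                continue
            if not healthy(host):       # e.g. the host blue-screens after the update
                return updated          # halt: blast radius limited to this point
            updated.append(host)
    return updated
```

With a faulty update (every host unhealthy), the push stops inside the first stage instead of reaching the entire fleet at once.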
Other security companies have had similar episodes in the past. McAfee’s buggy antivirus update in 2010 stalled hundreds of thousands of computers.
A routine update to CrowdStrike’s widely used cybersecurity software backfired spectacularly on Friday, triggering a global IT outage that crippled businesses, airlines, and government agencies. Security experts are now pointing to inadequate quality checks as a potential cause for the widespread disruption.
The faulty update to CrowdStrike’s Falcon Sensor software, intended to bolster security against emerging threats, contained flawed code that caused widespread system crashes on Windows-based computers. The outage’s impact was felt globally, with banks, airlines, hospitals, and even government offices experiencing significant disruptions.
While CrowdStrike swiftly released information to fix affected systems, experts warned that full recovery would be time-consuming, requiring manual removal of the flawed code.
“What it looks like is, potentially, the vetting or the sandboxing they do when they look at code, maybe somehow this file was not included in that or slipped through,” said Steve Cobb, chief security officer at Security Scorecard, whose own systems were impacted.
The problem became apparent soon after the update was rolled out, with users flooding social media with images of the dreaded “blue screen of death” (BSOD) accompanied by error messages.
Security researcher Patrick Wardle traced the outage to a file in the update containing configuration information or signatures used to detect malicious code. “It’s very common that security products update their signatures, like once a day… because they’re continually monitoring for new malware and because they want to make sure that their customers are protected from the latest threats,” he said, adding that the frequency of updates “is probably the reason why (CrowdStrike) didn’t test it as much.”
Experts criticised the lack of a phased rollout for the update. “Ideally, this would have been rolled out to a limited pool first,” John Hammond, principal security researcher at Huntress Labs, told Reuters. “That is a safer approach to avoid a big mess like this.”
This incident underscores the potential for catastrophic consequences when security updates, intended to protect systems, contain undetected flaws. It also highlights the need for robust quality control measures and cautious deployment strategies to prevent similar widespread outages in the future.
Srirang Srikantha, Founder & CEO, Yethi Consulting said, “The outages represent how fragile and interconnected our systems are. Companies like MSFT have great practices, and the fact that a bug passes through its process is unfortunate. It reiterates the need for good practices of testing before releasing new software to production systems.”
Sundareshwar K, Partner & Leader – Cybersecurity, PwC India commented, “This is a black swan event impacting not just businesses but the overall national machinery, and underscores how safeguarding entities against risk involves much more than technology… This development highlights how it is a misnomer that enhanced technology deployment alone will help organisations become more secure and ensure business continuity. While organisations work towards remediation of the current situation, the focus should be on rethinking risks and moving beyond the layers, patches, products and tools to building an inherently strong cyber architecture with complementary interventions that ensure resilience in the face of such unforeseen technology setbacks or failures.”
Athenian Tech said in a statement, “The recent CrowdStrike Falcon sensor incident highlights significant vulnerabilities and operational risks with automatic security updates, leading to widespread system failures, especially in enterprise environments. This underscores the need for rigorous testing and controlled deployment strategies of software updates. While CrowdStrike is addressing the issue, this incident emphasises the importance of balancing robust security with system stability and adopting best practices for software updates to prevent similar incidents in the future.”
Piyush Goel – Founder & CEO of Beyond Key said, “The complex interactions between CrowdStrike’s update and Microsoft’s infrastructure were likely unforeseen. CrowdStrike quickly identified the bug and rolled back the update, while CERT-In provided guidelines for users to delete the problematic file. This incident underscores the need for diverse and well-tested cybersecurity solutions to prevent similar large-scale outages in the future.”
The global impact of this outage is a testament to CrowdStrike’s widespread adoption, with its software used by over half of Fortune 500 companies and numerous government agencies, including the US Cybersecurity and Infrastructure Security Agency (CISA).
CrowdStrike’s Response
Here’s a breakdown of what happened, according to CrowdStrike’s official statements:
The Timeline:
July 19, 2024, 04:09 UTC: CrowdStrike released a sensor configuration update to Windows systems as part of its ongoing security operations.
July 19, 2024, 05:27 UTC: The faulty update was remediated.
The Impact:
Windows systems running Falcon Sensor version 7.11 and above that were online and downloaded the update between 04:09 UTC and 05:27 UTC were susceptible to a system crash.
Systems running Linux or macOS were not affected.
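The timeline above defines a precise exposure window. This sketch uses the timestamps from CrowdStrike’s timeline to compute the window’s length and check whether a given host’s (hypothetical) download time fell inside it:

```python
from datetime import datetime, timezone

# Window from CrowdStrike's timeline: the faulty channel file was live
# between 04:09 and 05:27 UTC on July 19, 2024.
RELEASED   = datetime(2024, 7, 19, 4, 9, tzinfo=timezone.utc)
REMEDIATED = datetime(2024, 7, 19, 5, 27, tzinfo=timezone.utc)

def was_exposed(downloaded_at: datetime) -> bool:
    """True if a Windows host pulled the update inside the faulty window."""
    return RELEASED <= downloaded_at < REMEDIATED

window_minutes = (REMEDIATED - RELEASED).total_seconds() / 60
print(f"Faulty update was live for {window_minutes:.0f} minutes")  # 78 minutes
```

A window of only 78 minutes producing a global outage underlines how quickly online Windows hosts pull Falcon content updates.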
The Technical Details:
The issue stemmed from a flawed update to “Channel File 291,” a configuration file that dictates how Falcon Sensor evaluates named pipe execution on Windows systems.
The update aimed to target malicious named pipes used in cyberattacks but triggered a logic error, leading to operating system crashes.
CrowdStrike has since corrected the logic error and updated Channel File 291.
Meta is planning to spend billions to buy roughly 5 percent of EssilorLuxottica, the €88 billion European eyewear giant it has collaborated with on two generations of Ray-Ban smart glasses. The Financial Times and other outlets earlier reported news of the talks, which I’ve also confirmed. I’m told the negotiations are advanced and it’s likely Meta will make the investment.
It turns out that Meta isn’t the only big tech company trying to get closer to EssilorLuxottica, however. Google recently approached the company’s leadership about putting its Gemini AI assistant in future smart glasses, several people familiar with the matter tell me. This move to potentially box Meta out of the high-profile partnership could be helping drive the large investment Mark Zuckerberg is now preparing to make. But there’s more to the situation to consider.
The flag, which was seen flying on the airless Moon, courted several controversies to the extent that some even denied the Moon landing ever happened. But, there was science behind it.
Fifty-five years ago, millions around the world were glued to their television screens as two humans prepared to set foot on an uncharted world.
Over 380,000 kilometres away, the lander began its descent. Minutes later, the historic call came: “Houston, Tranquillity Base here. The Eagle has landed.”
Humanity had reached the Moon. On July 20, 1969, astronaut Neil Armstrong made history as he stepped down the ladder and onto the lunar surface, followed closely by astronaut Buzz Aldrin.
The two spent the next few hours collecting samples, basking in the sunlight of the powdery landscape, and exploring the Sea of Tranquillity, Apollo 11’s landing site. They also proudly planted an American flag on the Moon’s surface.
FLYING A FLAG ON THE MOON
Nasa engineers faced a unique challenge when designing the American flag that would be planted on the Moon during the Apollo 11 mission.
The task of creating a flag that could “fly” in the airless lunar environment fell to Jack Kinzler, chief of technical services at Nasa’s Manned Spacecraft Center (now Johnson Space Center).
Kinzler and his team devised an ingenious solution to make the flag appear to wave on the Moon’s surface. They created a telescoping flagpole with a horizontal crossbar at the top.
The flag was attached to this crossbar and hemmed along the top edge to create a sleeve. This design allowed the flag to be extended outward, giving the impression of a flag flying in a breeze, despite the lack of atmosphere on the Moon.
The flagpole itself was made of anodised aluminium tubing, chosen for its lightweight properties and durability in extreme temperatures. The team also had to consider the flag’s material, opting for a nylon fabric that could withstand the harsh lunar environment and the intense sunlight.
Services from airlines to healthcare, shipping and finance were coming back online on Friday after a mistake in a security software update sparked hours-long global computer systems outages, another incident highlighting the vulnerability of the world’s interconnected technologies.
After the outage was resolved, companies were dealing with backlogs of delayed and canceled flights and medical appointments, missed orders and other issues that could take days to resolve. Businesses also face questions about how to avoid future blackouts triggered by technology meant to safeguard their systems.
A software update by global cybersecurity firm CrowdStrike (CRWD.O), one of the largest operators in the industry, triggered systems problems that grounded flights, forced broadcasters off air and left customers without access to services such as healthcare or banking. Global shipper FedEx (FDX.N) faced major disruptions and some moderators who police content on Meta’s Facebook were hit.
CrowdStrike is not a household name, but it is an $83 billion company with more than 20,000 subscribers around the world, including Amazon.com (AMZN.O) and Microsoft (MSFT.O). CrowdStrike CEO George Kurtz said on social media platform X that a defect was found “in a single content update for Windows hosts” that affected Microsoft customers.
“We’re deeply sorry for the impact that we’ve caused to customers, to travelers, to anyone affected by this, including our company,” Kurtz told NBC News.
CrowdStrike has one of the largest shares of the highly competitive cybersecurity market, leading some industry analysts to question whether control over such operationally critical software should remain with just a handful of companies.
The outage also raised concerns that many organizations are not well prepared to implement contingency plans when a single point of failure such as an IT system, or a piece of software within it, goes down. But these outages will happen again, experts say, until more contingencies are built into networks and organizations introduce better back-ups.
CrowdStrike shares closed down 11%. Shares of its rival SentinelOne (S.N) closed up 8%, and Palo Alto Networks (PANW.O) closed up 2%. Microsoft closed down 0.7%.
The scale of the outage was massive, but not yet quantifiable because it involved only systems that were running CrowdStrike software, said Ann Johnson, who heads Microsoft’s security and compliance business.
“We have hundreds of engineers right now working directly with CrowdStrike to get customers back online,” she said.
President Joe Biden was briefed on the outage, a White House official said. The U.S. Cybersecurity and Infrastructure Security Agency said it observed hackers using the outage for phishing and other malicious activities.
U.S. Customs and Border Protection said it was experiencing processing delays and working to mitigate issues related to international trade and travel. The Dutch and United Arab Emirates’ foreign ministries also reported disruptions.
“This event is a reminder of how complex and intertwined our global computing systems are and how vulnerable they are,” said Gil Luria, senior software analyst at D.A. Davidson.
“CrowdStrike and Microsoft will have a lot of work to do to make sure that it won’t allow other systems and products to cause this kind of failure in the future,” he said.
Wall Street’s main indexes fell on Friday, deepening a sell-off driven by tech stocks and mixed earnings. The Cboe Volatility index (.VIX), known as Wall Street’s “fear gauge,” hit its highest level since early May, and the dollar climbed as the worldwide cyber outage unnerved investors.
THOUSANDS OF FLIGHTS CANCELED
Air travel was immediately hit, because carriers depend on smooth scheduling that, when interrupted, can ripple into lengthy delays. Out of more than 110,000 scheduled commercial flights on Friday, 5,000 were canceled globally with more expected, according to aviation analytics firm Cirium.
Delta Air Lines (DAL.N) was one of the hardest hit, with 20% of its flights canceled, according to flight tracking service FlightAware. The U.S. carrier said it expected additional delays and cancellations potentially through the weekend.
ChatGPT maker OpenAI said on Thursday it was launching GPT-4o mini, a cost-efficient small AI model, aimed at making its technology more affordable and less energy-intensive, allowing the startup to target a broader pool of customers.
Microsoft-backed (MSFT.O) OpenAI, the market leader in the AI software space, has been working to make it cheaper and faster for developers to build applications based on its models, at a time when deep-pocketed rivals such as Meta (META.O) and Google (GOOGL.O) rush to grab a bigger share of the market.
Priced at 15 cents per million input tokens and 60 cents per million output tokens, the GPT-4o mini is more than 60% cheaper than GPT-3.5 Turbo, OpenAI said.
It currently outperforms the GPT-4 model on chat preferences and scored 82% on Massive Multitask Language Understanding (MMLU), OpenAI said.
MMLU is a textual intelligence and reasoning benchmark used to evaluate the capabilities of language models. A higher MMLU score signifies it can understand and use language better across a variety of domains, enhancing real-world usage.
The GPT-4o mini model’s score compared with 77.9% for Google’s Gemini Flash and 73.8% for Anthropic’s Claude Haiku, according to OpenAI.
Smaller language models require less computational power to run, making them a more affordable option for companies with limited resources looking to deploy generative AI in their operations.
With the mini model currently supporting text and vision in the application programming interface, OpenAI said support for text, image, video and audio inputs and outputs would be made available in the future.
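The published per-token rates make request costs easy to estimate. A minimal sketch, using hypothetical token counts (the rates are from the article; the example request sizes are illustrative assumptions):

```python
# Cost estimate for GPT-4o mini at the rates reported above:
# 15 cents per million input tokens, 60 cents per million output tokens.
INPUT_RATE = 0.15 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.60 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of a single API request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a hypothetical request with 2,000 input tokens and 500 output tokens
cost = request_cost(2_000, 500)
print(f"${cost:.6f}")  # $0.000600
```

At these rates, even a million such requests would cost on the order of a few hundred dollars, which is the affordability argument OpenAI is making for the small model.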
When it comes to promising forms of energy, nuclear fusion checks all the boxes: it’s clean, abundant, continuous and safe. It’s produced when the lightweight nuclei of two atoms fuse together to form a heavier nucleus, releasing large amounts of energy in the process. For fusion reactions to occur in a controlled manner, huge reactors are needed in the form of giant rings, which are filled with magnets to create magnetic fields where atomic particles buzz around and dance like a swarm of bees. Hard to picture? The good news is that you can now view a live simulation of this kind of reactor – called a tokamak – thanks to stunningly realistic 3D visualization technology.
At EPFL, the Laboratory for Experimental Museology (EM+) specializes in this technology and has developed a program that turns the terabytes of data generated from the tokamak simulations and testing carried out by EPFL’s Swiss Plasma Center (SPC) into an immersive 3D visualization experience. For the general public, the visualization is a journey into a ring of fireworks illustrating a possible future source of energy; for scientists, it’s a valuable tool that renders the complex phenomena of quantum physics tangible and helps them grasp the results of their calculations.
Images so precise they show wear and tear
The 3D visualization – a panorama measuring 4 meters high and 10 meters in diameter – is a faithful reproduction of the interior of EPFL’s variable-configuration tokamak (TCV), rendered in such stunning detail that it rivals even the best-quality gaming experience. The experimental reactor was built over 30 years ago and is still the only one of its kind in the world. “We used a robot to generate ultra-high-precision scans of the reactor interior, which we then compiled to produce a 3D model that replicates its components right down to their texture,” says Samy Mannane, a computer scientist at EM+. “We were even able to capture the wear and tear on the graphite tiles lining the reactor walls, which are subject to extremely high temperatures during test runs of the TCV.”
SPC engineers provided equations for calculating exactly how the particles move at a given point in time. The EM+ researchers then incorporated these equations, along with reactor data, into their 3D visualization system. The catch is that all the calculations have to be carried out in real time. “To produce just a single image, the system has to calculate the trajectories of thousands of moving particles at a speed of 60 times per second for each eye,” says Mannane. This hefty number-crunching is carried out by five computers with two GPUs each that EM+ acquired for this project. The computers’ output is fed into the panorama’s five 4K projectors. “We were able to build our system thanks to advances in computer graphics technology,” says Sarah Kenderdine, the professor who heads EM+. “It would’ve been impossible even just five years ago.”
The result is realistic images of mind-blowing quality. You can see the injection device that deposits particles into the tokamak as well as the graphite tiles capable of withstanding temperatures of over 100 million degrees Celsius. And the scale of all this is impressive. To give viewers an idea, the visualization includes an image of a human being – the reactor is roughly twice their size. As the simulation ramps up, the viewer feels quite small as thousands of particles zip by, spinning and twirling and chasing each other. Electrons are in red; protons are in green; and blue lines indicate the magnetic field. Users can adjust any of the parameters to view a specific part of the reactor at a chosen angle, with almost perfect rendering.
SPC director Paolo Ricci explains: “Visualization techniques are fairly advanced in astrophysics, owing largely to planetariums. But in nuclear fusion, we’re just starting to use this technology – thanks notably to the work we’re doing with EM+.” Drawing on SPC’s excellence in this area, EPFL is taking part in the International Thermonuclear Experimental Reactor (ITER) project and is a key member of the EUROfusion consortium. In fact, EPFL was chosen to house one of the consortium’s five Advanced Computing Hubs, giving the researchers involved in this EU-funded project an advanced tool for visualizing their work.
Combining output and art
Kenderdine says the biggest challenge was to “extract tangible information from such a huge database to produce a visualization that’s accurate, coherent and ‘real’ – even if it’s virtual. The result is extraordinary, and I would even say beautiful, and it gives scientists a useful tool that opens up a range of possibilities.”
“The physics behind the visualization process is extremely complicated,” says Ricci. “Tokamaks have many different moving parts: particles with heterogenous behavior, magnetic fields, waves for heating the plasma, particles injected from the outside, gases, and more. Even physicists have a hard time sorting everything out. The visualization developed by EM+ combines the standard output of simulation programs – basically, tables of numbers – with real-time visualization techniques that the lab uses to create a video-game-like atmosphere.”
Just hours after revealing the Pixel 9 Pro, Google is showing off the Pixel 9 Pro Fold with a brief, Gemini-linked teaser video. As is standard for Pixel phones, the devices have been leaked all over the internet already, but at least now we can end speculation about the look or name of Google’s next foldable Android phone.
Google’s teaser video clearly shows double-stacked lenses for the rear camera module as well as the outside screen and hinge.
For confirmation of other specs and details about upgrades from the first Fold, we may have to wait for Google’s hardware event on August 13th. There, we’ll see this phone, the 9 Pro, the Pixel Watch 3, and whatever else is left.
Microsoft’s AI-powered Designer app is coming out of preview today for both iOS and Android users. Microsoft Designer lets you use templates to create custom images, stickers, greeting cards, invitations, and more. Designer can also use AI to edit images and restyle them or create collages of images.
Originally available on the web or through Microsoft Edge, Designer has been in preview for nearly a year. It’s now generally available to anyone with a personal Microsoft account and as a free app for Windows, iOS, and Android. The mobile app includes the ability to create images and edit them on the go.
Microsoft Designer includes the usual text prompt for generating images, but there is also a big selection of templates you can use to create things like greeting cards, social media posts, icons, wallpapers, coloring book pages, and much more. Designer also includes an avatar creator, which it prompts you to use on the mobile version of the app.
You can also use Designer to edit images with AI, allowing you to restyle existing images or frame them with decorative AI-generated borders. Designer also includes the ability to edit and remove backgrounds, remove people or objects from images, and features like adding text and branding to images.
While Microsoft Designer is available as a standalone app today, Microsoft has also been making Designer available through Copilot in apps like Word and PowerPoint. Copilot Pro subscribers can create images and designs right within Word and PowerPoint, and Microsoft is adding a new banner image generator for Word documents soon.
Windows Insiders will also get access to Designer within the Photos app on Windows 11 today, with features like erasing objects, removing backgrounds, auto-cropping, and filters all available directly in Photos. Microsoft had been testing sending images from Photos to Designer, but it’s now integrating the features into Photos so you don’t have to leave the app. Similar features are also coming to Microsoft Edge soon.
Fei-Fei Li, the renowned computer scientist known as the “godmother of AI,” has founded a startup dubbed World Labs. In just four months, it’s already valued at more than $1 billion, the Financial Times reported.
World Labs hopes to use human-like processing of visual data to make AI capable of advanced reasoning, Reuters reported in May. The research to make it human-like, much like what ChatGPT is trying to do with generative AI, is still ongoing.
Li is best known for her contributions to computer vision, a branch of AI dedicated to helping machines interpret and comprehend visual information. She also spearheaded the development of ImageNet, an extensive visual database used for visual object recognition research. Li headed AI at Google Cloud from 2017 to 2018 and currently advises the White House task force on AI.
“[World Labs] is developing a model that understands the three-dimensional physical world; essentially the dimensions of objects, where things are and what they do,” an anonymous venture capitalist with knowledge of Li’s work told the Financial Times.
The startup has had two funding rounds, the latest was about $100 million, and it’s backed by Andreessen Horowitz and the AI fund Radical Ventures (which Li joined as a partner last year). Li founded World Labs while on partial leave from Stanford, where she co-directs the university’s Human-Centered AI Institute.
In a TED Talk in April, Li further explained the field of research her startup will work on advancing: algorithms capable of realistically extrapolating images and text into three-dimensional environments and acting on those predictions, using a concept known as “spatial intelligence.” This could bolster work in various fields such as robotics, augmented reality, virtual reality, and computer vision. If these capabilities continue to advance in the ambitious ways Li plans, they have the potential to transform industries like healthcare and manufacturing.
The findings suggest there could be hundreds of pits on the moon and thousands of lava tubes.
An international team of scientists has confirmed the existence of a sizable cave on the moon, potentially paving the way for future lunar habitats. The cave is located just 400 kilometres from the Apollo 11 landing site in the Sea of Tranquility, where astronauts Neil Armstrong and Buzz Aldrin made their historic moonwalk 55 years ago.
The discovery was made by an Italian-led research team who analysed radar data collected by NASA’s Lunar Reconnaissance Orbiter (LRO). Their findings, published Monday in the journal Nature Astronomy, reveal that the Mare Tranquillitatis pit, the deepest known pit on the Moon, leads to a cave approximately 45 meters wide and up to 80 meters long. The cave, which lies about 150 meters beneath the lunar surface, is roughly equivalent in area to 14 tennis courts.
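The tennis-court comparison above checks out as a rough upper bound. A quick sanity calculation, assuming the standard doubles court of 23.77 m by 10.97 m and treating the cave’s maximum extents as a rectangle (an approximation, since the cave is not rectangular):

```python
# Compare the reported cave extents (45 m wide, up to 80 m long)
# with a standard doubles tennis court (23.77 m x 10.97 m).
cave_area_m2 = 45 * 80            # upper-bound rectangular footprint
court_area_m2 = 23.77 * 10.97     # ~260.8 square meters

print(round(cave_area_m2 / court_area_m2, 1))  # 13.8
```

That ratio of roughly 13.8 courts matches the “14 tennis courts” figure reported by the researchers.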
Lorenzo Bruzzone from the University of Trento in Italy described the cave as “probably an empty lava tube,” suggesting that such features could serve as natural shelters for future lunar explorers. These underground structures could provide protection from the moon’s harsh environment, including cosmic rays, solar radiation, and micrometeorites, while maintaining relatively stable temperatures inside.
“Lunar caves have remained a mystery for over 50 years. So it was exciting to be able to finally prove the existence” of one, Leonardo Carrer and Lorenzo Bruzzone of the University of Trento wrote in an email, the Associated Press reported.
The research indicates that this cave, along with the more than 200 other pits identified on the moon, likely formed through the collapse of lava tubes—large underground tunnels created by ancient volcanic activity. These lava tubes could offer significant advantages for establishing lunar bases, as they require fewer construction efforts and provide inherent structural protection.
“Lunar orbiters first spotted pits on the moon more than a decade ago,” said Leonardo Carrer, the study’s first author. “Many are thought to be skylights that connect to underground caves, such as lava tubes.”
New observations reveal neutron stars paired with stars like our Sun
Most stars in our universe come in pairs. While our own Sun is a loner, many stars like our Sun orbit similar stars, while a host of other exotic pairings between stars and cosmic orbs pepper the universe. Black holes, for example, are often found orbiting each other. One pairing that has proved to be quite rare is that between a Sun-like star and a type of dead star called a neutron star.
Now, astronomers led by Caltech’s Kareem El-Badry have uncovered what appear to be 21 neutron stars in orbit around stars like our Sun. Neutron stars are dense burned-out cores of massive stars that exploded. On their own, they are extremely faint and usually cannot be detected directly. But as a neutron star orbits around a Sun-like star, it tugs on its companion, causing the star to shift back and forth in the sky. Using the European Space Agency’s Gaia mission, the astronomers were able to catch these telltale wobbles to reveal a new population of dark neutron stars.
“Gaia is continuously scanning the sky and measuring the wobbles of more than a billion stars, so the odds are good for finding even very rare objects,” says El-Badry, an assistant professor of astronomy at Caltech and an adjunct scientist at the Max Planck Institute for Astronomy in Germany.
The new study, which includes a team of co-authors from around the world, was published in The Open Journal for Astrophysics. Data from several ground-based telescopes, including the W. M. Keck Observatory on Maunakea, Hawai‘i; La Silla Observatory in Chile; and the Whipple Observatory in Arizona, were used to follow up the Gaia observations and learn more about the masses and orbits of the hidden neutron stars.
While neutron stars have previously been detected in orbit around stars like our Sun, those systems have all been more compact. With little distance separating the two bodies, a neutron star (which is heavier than a Sun-like star) can steal mass away from its partner. This mass transfer process makes the neutron star shine brightly at X-ray or radio wavelengths. In contrast, the neutron stars in the new study are much farther from their partners—on the order of one to three times the distance between Earth and the Sun.
That means the newfound stellar corpses are too far from their partners to be stealing material from them. They are instead quiescent and dark. “These are the first neutron stars discovered purely due to their gravitational effects,” El-Badry says.
The discovery comes as somewhat of a surprise because it is not clear how an exploded star winds up next to a star like our Sun.
“We still do not have a complete model for how these binaries form,” explains El-Badry. “In principle, the progenitor to the neutron star should have become huge and interacted with the solar-type star during its late-stage evolution.” The huge star would have knocked the little star around, likely temporarily engulfing it. Later, the neutron star progenitor would have exploded in a supernova, which, according to models, should have unbound the binary systems, sending the neutron stars and Sun-like stars careening in opposite directions.
“The discovery of these new systems shows that at least some binaries survive these cataclysmic processes even though models cannot yet fully explain how,” he says.
Gaia was able to find the unlikely companions due to their wide orbits and long periods (the Sun-like stars orbit around the neutron stars with periods of six months to three years). “If the bodies are too close, the wobble will be too small to detect,” El-Badry says. “With Gaia, we are more sensitive to the wider orbits.” Gaia is also most sensitive to binaries that are relatively nearby. Most of the newly discovered systems are located within 3,000 light-years of Earth – a relatively small distance compared, for example, to the 100,000-light-year diameter of the Milky Way galaxy.
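The size of the wobble Gaia is looking for can be sketched with back-of-envelope numbers. The star orbits the pair’s center of mass, and an offset of 1 AU seen from 1 parsec subtends 1 arcsecond. The masses, separation, and distance below are illustrative assumptions consistent with the ranges in this article, not values from the study:

```python
# Rough astrometric wobble of a Sun-like star orbiting a neutron star.
# Assumed (illustrative) values: 1.0 solar-mass star, 1.4 solar-mass
# neutron star, 1.5 AU separation, 3,000 light-years away.
LY_PER_PC = 3.26156  # light-years per parsec

def wobble_mas(m_star: float, m_ns: float, a_au: float, dist_ly: float) -> float:
    """Angular semi-amplitude of the star's wobble, in milliarcseconds."""
    # The star's orbit around the center of mass has semi-major axis
    # a_star = a * m_ns / (m_star + m_ns).
    a_star_au = a_au * m_ns / (m_star + m_ns)
    dist_pc = dist_ly / LY_PER_PC
    return 1000.0 * a_star_au / dist_pc  # 1 AU at 1 pc = 1 arcsec

print(round(wobble_mas(1.0, 1.4, 1.5, 3000), 2))  # 0.95
```

A wobble of roughly a milliarcsecond is comfortably above Gaia’s astrometric precision for bright nearby stars, which is why wide, nearby binaries are the ones it can catch.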
The new observations also suggest just how rare the pairings are. “We estimate that about one in a million solar-type stars is orbiting a neutron star in a wide orbit,” he said.
El-Badry also has an interest in finding unseen dormant black holes in orbit with Sun-like stars. Using Gaia data, he has found two of these quiet black holes hidden in our galaxy. One, called Gaia BH1, is the closest known black hole to Earth at 1,600 light-years away.
The moon is hostile to human life and its surface is exposed to powerful levels of cosmic radiation – but experts believe the underground caves could be “suitable for habitation purposes”.
Scientists say they have discovered an underground cave, stretching tens of metres below an open pit on the moon, which could serve as a base for future astronauts.
This is the first lunar tunnel to be found which could be accessible to humans, according to researchers.
The hollowed passage lies beneath a pit about 100 metres wide in the Sea Of Tranquillity, a dark region on the near side of the moon which is visible with the naked eye.
The region is also where the first humans on the moon, Neil Armstrong and Buzz Aldrin, touched down in 1969.
The cave is estimated to be 30-80m long (98-262ft), around 45m wide (147ft) and 130-170m (426-558ft) below the surface.
Previous caves found on the moon do not feature any entry points, the scientists added.
The “milestone discovery” comes as NASA prepares to send its first crewed mission to the moon in more than 50 years.
Leonardo Carrer, an assistant professor at University Of Trento in Italy, said: “For the first time, we have located and accurately mapped a cave that is actually accessible from a pit on the lunar surface.
“We were able to obtain the first 3D model of a part of the cave’s actual shape.
“Building a base on the surface of the moon requires highly complex engineering solutions, which may be less effective than what is already provided by nature.”
Lorenzo Bruzzone, a professor at the university, added: “These caves have been theorised for over 50 years, but it is the first time ever that we have demonstrated their existence.”
The moon’s surface is exposed to cosmic radiation that is up to 150 times more powerful than on Earth.
The surface is also vulnerable to frequent meteorite impacts and extreme temperatures, ranging from 127C to -173C.
Previous research has suggested underground caves have an average temperature of around 17C, creating suitable conditions for astronauts.
Commenting on the study, Mahesh Anand, professor of planetary science and exploration at the Open University, said: “The future exploration of the moon through extended human presence would require protection from the harsh environment and micrometeoroid impacts.
“In that context, these underground structures could provide a suitable location for habitation purposes.”
Apple’s (AAPL.O) shares rose 2.5% to a record high on Monday after Morgan Stanley raised its price target on the iPhone maker’s shares and designated the stock as a “top pick,” citing the company’s AI efforts as a boost to device sales.
In what was seen as a move to catch up with Alphabet’s (GOOGL.O) Google and Microsoft-backed OpenAI, the iPad maker last month unveiled Apple Intelligence, luring customers to upgrade their devices to be able to use the new technology.
Apple’s shares, which have jumped nearly 20% this year, rose to $236.30, giving the company a market value of $3.62 trillion, the highest in the world.
“Apple Intelligence is a clear catalyst to boost iPhone and iPad shipments,” Morgan Stanley analysts said.
The new technology is compatible with only 8% of iPhone and iPad devices, while Apple has an installed base of 1.3 billion smartphones currently in use by customers, the analysts said, adding that the company could sell nearly 500 million iPhones over the next two years.
Morgan Stanley, which previously expected Apple to sell between 230 million and 235 million iPhones annually over the next two years, raised its price target on the company’s shares to $273 from $216.
The stock has an average rating of “buy” with a median price target of $217, and has outperformed the S&P 500 index (.SPX) this year, according to LSEG data.
Industry analysts expect Samsung (005930.KS) and Apple to lead the charge in the global smartphone market’s recovery this year, given the buzz around GenAI-enabled smartphones.
The artificial intelligence boom has had such a profound effect on big tech companies that their energy consumption, and with it their carbon emissions, have surged.
The spectacular success of large language models such as ChatGPT has helped fuel this growth in energy demand. At 2.9 watt-hours per ChatGPT request, AI queries require about 10 times the electricity of traditional Google queries, according to the Electric Power Research Institute, a nonprofit research firm. Emerging AI capabilities such as audio and video generation are likely to add to this energy demand.
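The per-query figures above can be scaled up to see why utilities are worried. A rough sketch, where the daily query volume is a hypothetical chosen for illustration, not a reported figure:

```python
# Scale of the per-query energy figures reported above:
# 2.9 Wh per AI query, about 10x a traditional search query.
WH_PER_AI_QUERY = 2.9
WH_PER_SEARCH_QUERY = WH_PER_AI_QUERY / 10  # ~0.29 Wh, implied by the 10x ratio

def daily_energy_mwh(queries_per_day: float, wh_per_query: float) -> float:
    """Total daily energy in megawatt-hours."""
    return queries_per_day * wh_per_query / 1_000_000

# Hypothetical volume: 100 million AI queries per day
print(round(daily_energy_mwh(100e6, WH_PER_AI_QUERY), 1))  # 290.0 MWh/day
```

At that hypothetical volume, AI queries alone would draw hundreds of megawatt-hours a day, roughly the output of a mid-sized power plant running for several hours.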
The energy needs of AI are shifting the calculus of energy companies. They’re now exploring previously untenable options, such as restarting a dormant nuclear reactor at the Three Mile Island power plant, the site of the infamous 1979 accident.
Data centers have had continuous growth for decades, but the magnitude of growth in the still-young era of large language models has been exceptional. AI requires a lot more computational and data storage resources than the pre-AI rate of data center growth could provide.
AI and the grid
Thanks to AI, the electrical grid – in many places already near its capacity or prone to stability challenges – is experiencing more pressure than before. There is also a substantial lag between computing growth and grid growth. Data centers take one to two years to build, while adding new power to the grid requires over four years.
As a recent report from the Electric Power Research Institute lays out, just 15 states contain 80% of the data centers in the U.S. Some states – such as Virginia, home to Data Center Alley – astonishingly have over 25% of their electricity consumed by data centers. There are similar trends of clustered data center growth in other parts of the world. For example, Ireland has become a data center nation.
Along with the need to add more power generation to sustain this growth, nearly all countries have decarbonization goals. This means they are striving to integrate more renewable energy sources into the grid. Renewables such as wind and solar are intermittent: The wind doesn’t always blow and the sun doesn’t always shine. The dearth of cheap, green and scalable energy storage means the grid faces an even bigger problem matching supply with demand.
Additional challenges to data center growth include increasing use of water cooling for efficiency, which strains limited fresh water sources. As a result, some communities are pushing back against new data center investments.
Better tech
There are several ways the industry is addressing this energy crisis. First, computing hardware has gotten substantially more energy efficient over the years in terms of the operations executed per watt consumed. Data centers’ power usage effectiveness (PUE) – the ratio of a facility’s total power draw to the power used for computing alone, with the remainder going to cooling and other infrastructure – has been reduced to 1.5 on average, and even to an impressive 1.2 in advanced facilities. New data centers have more efficient cooling by using water cooling and external cool air when it’s available.
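What a given PUE means for overhead power is simple arithmetic. A minimal sketch, where the 1 MW IT load is an illustrative assumption:

```python
# PUE (power usage effectiveness) = total facility power / IT power.
# A PUE of 1.5 means every watt of computing carries 0.5 W of cooling
# and other infrastructure overhead; 1.2 means only 0.2 W of overhead.
def overhead_watts(it_power_w: float, pue: float) -> float:
    """Power spent on cooling/infrastructure for a given IT load."""
    return it_power_w * (pue - 1.0)

# Hypothetical 1 MW IT load
print(round(overhead_watts(1_000_000, 1.5)))  # 500000 W at the average PUE
print(round(overhead_watts(1_000_000, 1.2)))  # 200000 W in an advanced facility
```

Moving a 1 MW facility from the average 1.5 down to 1.2 saves roughly 300 kW of continuous overhead, which is why operators chase even small PUE improvements.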
Unfortunately, efficiency alone is not going to solve the sustainability problem. In fact, Jevons paradox points to how efficiency may result in an increase of energy consumption in the longer run. In addition, hardware efficiency gains have slowed down substantially, as the industry has hit the limits of chip technology scaling.
Google parent Alphabet is in advanced negotiations to acquire cybersecurity startup Wiz for approximately $23 billion. This deal would mark Alphabet’s largest acquisition to date, amidst increasing regulatory scrutiny.
Alphabet, the parent company of Google, is close to buying cybersecurity startup Wiz for about $23 billion. If this happens, it will be Alphabet’s biggest purchase ever. Most of the payment will be in cash, and the deal might be finalized soon, according to a report by Reuters.
Wiz started in Israel and is now based in New York. It’s a fast-growing company that offers cloud-based cybersecurity services. These services use artificial intelligence to detect and respond to threats in real-time. In 2023, Wiz made around $350 million in revenue and works with 40 per cent of Fortune 100 companies.
This potential deal is notable because the U.S. government, under President Joe Biden, is more carefully watching big tech companies to prevent them from getting too powerful through acquisitions. However, Alphabet is still moving forward with its plans.
Wiz partners with big cloud service providers like Microsoft and Amazon. Its clients include major companies like Morgan Stanley and DocuSign. Wiz has 900 employees across the US, Europe, Asia, and Israel, and plans to hire 400 more people in 2024.
OpenAI whistleblowers have filed a complaint with the U.S. Securities and Exchange Commission, calling for an investigation over the artificial intelligence company’s allegedly restrictive non-disclosure agreements, according to a letter seen by Reuters.
“Given the well-documented potential risks posed by the irresponsible deployment of AI, we urge the Commissioners to immediately approve an investigation into OpenAI’s prior NDAs, and to review current efforts apparently being undertaken by the company to ensure full compliance with SEC rules,” according to the letter, which was provided to Reuters by the office of Sen. Chuck Grassley.
The AI company allegedly made employees sign agreements that required them to waive their federal rights to whistleblower compensation, according to the letter.
The whistleblowers asked the SEC to fine OpenAI for each improper agreement, to the extent the agency deemed appropriate.
An SEC spokesperson said in an emailed statement that it does not comment on the existence or nonexistence of a possible whistleblower submission.
OpenAI did not immediately respond to requests for comment on the letter.
“Artificial intelligence is rapidly and dramatically altering the landscape of technology as we know it,” said Sen. Grassley, whose office said the letter was provided by the whistleblowers. He added that “OpenAI’s policies and practices appear to cast a chilling effect on whistleblowers’ right to speak up and receive due compensation for their protected disclosures.”
Nasa on Friday unveiled a set of images of two galaxies merging in a sort of “cosmic ballet”. The galaxies, affectionately named the Penguin and the Egg, are located 326 million light-years away in the constellation Hydra.
The release of these images marked the second anniversary of the telescope’s first scientific results.
The image was taken by the James Webb Space Telescope, which was launched in 2021 and has been collecting data ever since.
Since becoming operational, Webb has observed galaxies teeming with stars that formed within a few hundred million years of the Big Bang event that marked the beginning of the universe about 13.8 billion years ago.
“We see two galaxies, each a collection of billions of stars. The galaxies are in the process of merging. That’s a common way that galaxies like our own build up over time, to grow from small galaxies – like those that Webb has found shortly after the Big Bang – into mature galaxies like our own Milky Way,” Reuters quoted Jane Rigby, Nasa Webb senior project scientist, as saying.
The Penguin and Egg galaxies, collectively known as Arp 142, are shown in the images connected by a haze of stars and gas as they slowly merge. The Penguin galaxy, formally called NGC 2936, is a distorted spiral galaxy, while the Egg galaxy, NGC 2937, is a compact elliptical galaxy. Their interaction began between 25 and 75 million years ago, and they are expected to become a single galaxy in hundreds of millions of years.
Webb has made significant contributions to the understanding of the universe, detecting the earliest known galaxies and providing insights into exoplanet composition and star-forming regions.
ChatGPT maker OpenAI is working on a novel approach to its artificial intelligence models in a project code-named “Strawberry,” according to a person familiar with the matter and internal documentation reviewed by Reuters.
The project, details of which have not been previously reported, comes as the Microsoft-backed startup races to show that the types of models it offers are capable of delivering advanced reasoning capabilities.
Teams inside OpenAI are working on Strawberry, according to a copy of a recent internal OpenAI document seen by Reuters in May. Reuters could not ascertain the precise date of the document, which details a plan for how OpenAI intends to use Strawberry to perform research. The source described the plan to Reuters as a work in progress. The news agency could not establish how close Strawberry is to being publicly available.
How Strawberry works is a tightly kept secret even within OpenAI, the person said.
The document describes a project that uses Strawberry models with the aim of enabling the company’s AI to not just generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms “deep research,” according to the source.
This is something that has eluded AI models to date, according to interviews with more than a dozen AI researchers.
Asked about Strawberry and the details reported in this story, an OpenAI company spokesperson said in a statement: “We want our AI models to see and understand the world more like we do. Continuous research into new AI capabilities is a common practice in the industry, with a shared belief that these systems will improve in reasoning over time.”
The spokesperson did not directly address questions about Strawberry.
The Strawberry project was formerly known as Q*, which Reuters reported last year was already seen inside the company as a breakthrough.
Two sources described viewing earlier this year what OpenAI staffers told them were Q* demos, capable of answering tricky science and math questions out of reach of today’s commercially-available models.
On Tuesday at an internal all-hands meeting, OpenAI showed a demo of a research project that it claimed had new human-like reasoning skills, according to Bloomberg. An OpenAI spokesperson confirmed the meeting but declined to give details of the contents. Reuters could not determine if the project demonstrated was Strawberry.
OpenAI hopes the innovation will improve its AI models’ reasoning capabilities dramatically, the person familiar with it said, adding that Strawberry involves a specialized way of processing an AI model after it has been pre-trained on very large datasets.
Researchers Reuters interviewed say that reasoning is key to AI achieving human or super-human-level intelligence.
While large language models can already summarize dense texts and compose elegant prose far more quickly than any human, the technology often falls short on common sense problems whose solutions seem intuitive to people, like recognizing logical fallacies and playing tic-tac-toe. When the model encounters these kinds of problems, it often “hallucinates” bogus information.
AI researchers interviewed by Reuters generally agree that reasoning, in the context of AI, involves the formation of a model that enables AI to plan ahead, reflect how the physical world functions, and work through challenging multi-step problems reliably.
Improving reasoning in AI models is seen as the key to unlocking the ability for the models to do everything from making major scientific discoveries to planning and building new software applications.
OpenAI CEO Sam Altman said earlier this year that in AI “the most important areas of progress will be around reasoning ability.”
Other companies like Google, Meta and Microsoft are likewise experimenting with different techniques to improve reasoning in AI models, as are most academic labs that perform AI research. Researchers differ, however, on whether large language models (LLMs) are capable of incorporating ideas and long-term planning into how they do prediction. For instance, one of the pioneers of modern AI, Yann LeCun, who works at Meta, has frequently said that LLMs are not capable of humanlike reasoning.
AI CHALLENGES
Strawberry is a key component of OpenAI’s plan to overcome those challenges, the source familiar with the matter said. The document seen by Reuters described what Strawberry aims to enable, but not how.
In recent months, the company has privately been signaling to developers and other outside parties that it is on the cusp of releasing technology with significantly more advanced reasoning capabilities, according to four people who have heard the company’s pitches. They declined to be identified because they are not authorized to speak about private matters.
Strawberry includes a specialized way of what is known as “post-training” OpenAI’s generative AI models, or adapting the base models to hone their performance in specific ways after they have already been “trained” on reams of generalized data, one of the sources said.
The post-training phase of developing a model involves methods like “fine-tuning,” a process used on nearly all language models today that comes in many flavors, such as having humans give feedback to the model based on its responses and feeding it examples of good and bad answers.
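The preference-feedback flavor of fine-tuning described above can be illustrated with a minimal, hypothetical sketch: a toy “reward model” that learns, from pairs of good and bad answers, to score the good ones higher. Real post-training uses large neural networks and human raters; the bag-of-words linear model, the perceptron-style update rule, and the example pairs below are stand-ins for illustration only, not OpenAI’s method.

```python
# Toy sketch of preference-based fine-tuning: learn to score "good" answers
# above "bad" ones from example pairs. Purely illustrative.
from collections import defaultdict

def features(text):
    """Bag-of-words feature counts for a piece of text."""
    counts = defaultdict(float)
    for word in text.lower().split():
        counts[word] += 1.0
    return counts

def score(weights, text):
    """Linear score: weighted sum of the text's word counts."""
    return sum(weights[w] * c for w, c in features(text).items())

def train(pairs, epochs=20, lr=0.5):
    """Perceptron-style updates: whenever the 'bad' answer scores at least
    as high as the 'good' one, nudge weights toward the good answer."""
    weights = defaultdict(float)
    for _ in range(epochs):
        for good, bad in pairs:
            if score(weights, good) <= score(weights, bad):
                for w, c in features(good).items():
                    weights[w] += lr * c
                for w, c in features(bad).items():
                    weights[w] -= lr * c
    return weights

# Hypothetical examples of good vs. bad answers.
pairs = [
    ("the answer is 4 because 2 plus 2 equals 4", "maybe 5 or something"),
    ("paris is the capital of france", "i think it could be rome"),
]
w = train(pairs)
for good, bad in pairs:
    print(score(w, good) > score(w, bad))  # good should now outrank bad
```

The same shape, scaled up from word counts to neural networks and from toy pairs to human preference data, underlies techniques like reinforcement learning from human feedback.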
Picture this: a security robot that’s always on duty, with eyes that never close and a focus that never wavers.
That’s Athena for you.
Developed by the folks at Kody Robots, Athena is like the superhero of surveillance robots.
It’s got the brains of the most advanced artificial intelligence and can adapt to any security challenge thrown its way. Whether it’s day or night, rain or shine, Athena’s on the job making sure everything’s safe and sound.
Athena’s technical specs
Athena is equipped with four 18x zoom cameras, providing extensive surveillance capabilities. Its thermal imaging technology can detect temperatures ranging from 50 to 1022 ℉ (wide angle), ensuring that Athena can accurately identify heat signatures.
In terms of physical dimensions, Athena is approximately 4 feet 7 inches tall and 1 foot 7 inches deep. It weighs 121 pounds, which balances stability with mobility.
Athena features a four-microphone array for clear audio capture and a 9-watt speaker for effective communication.
Athena is powered by a robust lithium-ion battery with a 25.6 volts / 20 ampere-hours capacity. The automatic charging feature ensures that Athena is always charged and ready for continuous operation.
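Those battery figures imply a total stored energy of about 512 watt-hours. A quick back-of-the-envelope check (the 60-watt average draw below is a hypothetical figure, not from Kody Robots):

```python
# Rough energy check for Athena's stated battery (25.6 V, 20 Ah).
voltage_v = 25.6
capacity_ah = 20
energy_wh = voltage_v * capacity_ah  # stored energy in watt-hours

# Hypothetical average power draw; actual consumption is not published.
draw_w = 60
hours_per_charge = energy_wh / draw_w
print(f"{energy_wh:.0f} Wh, roughly {hours_per_charge:.1f} h at {draw_w} W")
```

With automatic recharging, even a modest runtime per charge is enough for continuous patrol duty.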
The eyes that never sleep
Athena’s most striking feature is its 360° Ultra HD recording capability. With eye-level cameras providing comprehensive coverage, this robotic guard captures every detail with exceptional clarity. This constant, high-quality surveillance significantly enhances security in various environments, from corporate campuses to healthcare facilities.
AI-driven threat detection
What sets Athena apart is its AI-based people detection technology. This smart system minimizes false alarms, allowing security teams to focus on real threats. This feature’s precision makes Athena an ideal choice for high-stakes environments where accuracy is crucial.
More than just a camera
Athena goes beyond simple surveillance. Equipped with a two-way intercom system, it serves as a communication hub in emergencies. This feature allows security personnel to interact with individuals in the monitored area, enhancing situational awareness and response capabilities.
Adaptability across industries
One of Athena’s strengths is its versatility. Whether patrolling a shopping mall, safeguarding a warehouse or monitoring a public event, Athena adapts to diverse environments. Its presence not only deters potential security breaches but also gives businesses a competitive edge in safety measures.
Kurt’s key takeaways
As security challenges evolve, so must our solutions. Athena represents a significant leap forward in surveillance technology. By combining AI, high-definition imaging and adaptable features, it offers a level of security that traditional methods struggle to match. As we move towards a future where smart technology plays an increasingly crucial role in safety, Athena stands as a testament to the potential of AI-driven security solutions. Whether in corporate, public or private settings, this tireless guardian is set to redefine what we expect from security systems, promising a safer, more efficiently monitored world.
Astronauts on spacewalks famously have to relieve themselves inside their spacesuits. Not only is this uncomfortable for the wearer and unhygienic, it is also wasteful, as – unlike wastewater on board the International Space Station (ISS) – the water in urine from spacewalks is not recycled.
A solution for these challenges would be full-body ‘stillsuits’ like those in the blockbuster Dune franchise, which absorbed and purified water lost through sweating and urination, and recycled it into drinkable water. Now, this sci-fi is about to become reality, with a prototype novel urine collection and filtration system for spacesuits. The design, by researchers from Cornell University, is published in Frontiers in Space Technologies.
“The design includes a vacuum-based external catheter leading to a combined forward-reverse osmosis unit, providing a continuous supply of potable water with multiple safety mechanisms to ensure astronaut wellbeing,” said Sofia Etlin, a research staff member at Weill Cornell Medicine and Cornell University, and the study’s first author.
In 2025 and 2026, NASA is planning for the Artemis II and III missions, where a crew will orbit the Moon and land on its south pole, respectively. These missions are expected to be followed by crewed missions to Mars by the early 2030s. However, astronauts have long complained about a lack of comfort and hygiene of the existing maximum absorbency garment (MAG) – the waste management system of traditional NASA spacesuits, in use since the late 1970s – which functions like a multi-layered adult diaper made of superabsorbent polymer.
“The MAG has reportedly leaked and caused health issues such as urinary tract infections and gastrointestinal distress. Additionally, astronauts currently have only one liter of water available in their in-suit drink bags. This is insufficient for the planned, longer-lasting lunar spacewalks, which can last ten hours, and even up to 24 hours in an emergency,” said Etlin.
Astronauts have also requested that the time needed to fill and de-gas the in-suit drink bags be reduced in future spacesuits, and that a separate supply of non-caffeinated high-energy drink be added.
With all these objectives in mind, Etlin and colleagues have now designed a urine collection device, including an undergarment made of multiple layers of flexible fabric. This connects to a collection cup (with a different shape and size for women and men) of molded silicone, to fit around the genitalia.
As the rest of the tech industry seems to mostly shift to overproduced infomercials for their product launches, Samsung is holding fast to its love for giant live events in huge arenas. This year, at Unpacked in Paris, the company announced a whole lineup of new gadgets. The new Fold and Flip look nice but also a bit uninspired; the Watch Ultra and Buds 3 look almost too familiar; and the Galaxy Ring might be the beginning of something really cool.
On this episode of The Vergecast, we talk through all of Samsung’s announcements and try to figure out whether “Apple products but for Android” is actually a winning strategy. It might be! Plus, we debate what to make of Samsung’s somewhat lackluster upgrades for the Flip and Fold phones — maybe these just aren’t the smartphone shapes of the future. Or at least not yet.
After that, we talk about a weird week in the streaming biz, from the maybe-finally-really-happening Paramount / Skydance deal to the looming end of Redbox to Instagram’s somewhat surprising plan to not try and do longform video.
Finally, in the lightning round, we talk Nothing’s awesomely cheap new phone, the latest in the AI copyright lawsuit world, and the sad current state of TUAW.
If you want to know more about everything we talk about in this episode, here are some links to get you started, first on Samsung:
Samsung Galaxy Unpacked: all the news on the Galaxy Ring, Fold, Flip, Watch, and AI
Samsung’s Galaxy Z Fold 6 and Flip 6 are pricier with minor updates
Samsung’s Galaxy Ring could be the one ring to rule an ecosystem
Samsung Galaxy Watch Ultra hands-on: ultra déjà vu
Samsung’s new Galaxy Buds are blatant AirPod clones in both form and function
Samsung, Google, and Qualcomm are, uh, still doing that XR thing.
Motorola’s 2024 Razr Plus is a fun and flawed flip phone
A large clinical trial in South Africa and Uganda has shown that a twice-yearly injection of a new pre-exposure prophylaxis drug gives young women total protection from HIV infection.
The trial tested whether the six-month injection of lenacapavir would provide better protection against HIV infection than two other drugs, both daily pills. All three medications are pre-exposure prophylaxis (or PrEP) drugs.
Physician-scientist Linda-Gail Bekker, principal investigator for the South African part of the study, tells Nadine Dreyer what makes this breakthrough so significant and what to expect next.
Tell us about the trial and what it set out to achieve
The Purpose 1 trial with 5,000 participants took place at three sites in Uganda and 25 sites in South Africa to test the efficacy of lenacapavir and two other drugs.
Lenacapavir (Len LA) is a capsid inhibitor. It interferes with the HIV capsid, a protein shell that protects HIV’s genetic material and enzymes needed for replication. It is administered just under the skin once every six months.
The randomized controlled trial, sponsored by the drug developers Gilead Sciences, tested several things.
The first was whether a six-monthly injection of lenacapavir was safe and would provide better protection against HIV infection as PrEP for women between the ages of 16 and 25 years than Truvada F/TDF, a daily PrEP pill in wide use that has been available for more than a decade.
Secondly, the trial also tested whether Descovy F/TAF, a newer daily pill, was as effective as F/TDF. The newer F/TAF has superior pharmacokinetic properties to F/TDF. Pharmacokinetic refers to the movement of a drug into, through, and out of the body. F/TAF is a smaller pill and is in use among men and transgender women in high-income countries.
The trial had three arms. Young women were randomly assigned to one of the arms in a 2:2:1 ratio (Len LA: F/TAF oral: F/TDF oral) in a double-blinded fashion. This means neither the participants nor the researchers knew which treatment participants were receiving until the clinical trial was over.
In eastern and southern Africa, young women are the population who bear the brunt of new HIV infections. They also find a daily PrEP regimen challenging to maintain for a number of social and structural reasons.
During the randomized phase of the trial, none of the 2,134 women who received lenacapavir contracted HIV. There was 100 percent efficacy.
By comparison, 16 of the 1,068 women (or 1.5%) who took Truvada (F/TDF) and 39 of 2,136 (1.8%) who received Descovy (F/TAF) contracted HIV.
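The arithmetic behind those percentages, and the arm sizes implied by the 2:2:1 allocation, can be checked directly from the figures reported above:

```python
# Infections and arm sizes from the Purpose 1 randomized phase.
arms = {
    "lenacapavir (Len LA)": (0, 2134),
    "Truvada (F/TDF)":      (16, 1068),
    "Descovy (F/TAF)":      (39, 2136),
}
for name, (infections, n) in arms.items():
    rate = 100 * infections / n  # percent infected over the randomized phase
    print(f"{name}: {infections}/{n} = {rate:.1f}%")
# The arm sizes reflect the 2:2:1 (Len LA : F/TAF : F/TDF) allocation ratio.
```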
At a recent review, the independent data safety monitoring board recommended that the trial’s “blinded” phase be stopped and that all participants be offered a choice of PrEP.
This board is an independent committee of experts who are put in place at the start of a clinical trial. They see the unblinded data at stipulated times during the trial to monitor progress and safety. They ensure that a trial does not continue if there is harm or a clear benefit in one arm over others.
What is the significance of these trials?
This breakthrough gives great hope that we have a proven, highly effective prevention tool to protect people from HIV.
There were 1.3 million new HIV infections globally in the past year. Although that’s fewer than the 2 million infections seen in 2010, it is clear that at this rate, we are not going to meet the HIV new infection target that UNAIDS set for 2025 (fewer than 500,000 globally) or potentially even the goal to end Aids by 2030.
PrEP is not the only prevention tool.
PrEP should be provided alongside HIV self-testing, access to condoms, screening and treatment for sexually transmitted infections, and access to contraception for women of childbearing potential.
In addition, young men should be offered medical male circumcision for health reasons.
But despite these options, we haven’t quite got to the point where we have been able to stop new infections, particularly among young people.
For young people, the daily decision to take a pill or use a condom, or take a pill at the time of sexual intercourse can be very challenging.
HIV scientists and activists hope that young people may find that having to make this “prevention decision” only twice a year may reduce unpredictability and barriers.
For a young woman who struggles to get to an appointment at a clinic in a town or who can’t keep pills without facing stigma or violence, an injection just twice a year is the option that could keep her free of HIV.
What happens now?
The plan is that the Purpose 1 trial will go on but now in an “open-label” phase. This means that study participants will be “unblinded”: they will be told whether they have been in the “injectable” or oral TDF or oral TAF groups.
They will be offered the choice of PrEP they would prefer as the trial continues.
A sister trial is also underway: Purpose 2 is being conducted in a number of regions, including some sites in Africa among cisgender men and transgender and nonbinary people who have sex with men.
It’s important to conduct trials among different groups because we have seen differences in effectiveness. Whether the sex is anal or vaginal is important and may have an impact on effectiveness.
How long until the drug is rolled out?
We have read in a Gilead Sciences press statement that within the next couple of months the company will submit the dossier with all the results to a number of country regulators, particularly the Ugandan and South African regulators.
In late April, farmers in Saskatchewan stumbled upon spacecraft fragments while preparing their fields for seeding. It sounds like the beginning of a science fiction movie, but this really happened, sending a powerful warning: it is only a matter of time before someone is seriously hurt or killed by falling space junk.
The Axiom Space private astronaut mission (Ax-3) concluded safely on Feb. 9 when its SpaceX Crew Dragon capsule splashed down off the coast of Florida. Several weeks later, the Crew Dragon’s cargo trunk re-entered the atmosphere over Canada after being abandoned in orbit prior to the capsule’s return.
Several incidents
The Federal Aviation Administration, charged with approving commercial spaceflight launches in the United States, has claimed that such trunks typically “burn up” during their re-entry.
This is clearly incorrect. Similar fragments, likely from the trunk of a different Crew Dragon mission, were found in North Carolina in May, including a smaller piece that landed on the roof of a house.
Trunk fragments were even found from the first operational crewed Dragon mission (Crew-1), with those pieces strewn over fields in New South Wales, Australia. It is becoming evident that deadly debris falls to the ground every time a Crew Dragon trunk re-enters, with pieces being found whenever this occurs over an accessible area.
These are not small pieces, with some approaching the size of ping-pong tables and weighing 100 pounds. They could easily cause a fatality or substantial damage.
Crew Dragon trunks are only one part of a much larger problem.
A matter of luck
Private or governmental, American or Chinese, organizations involved with space launches regularly allow objects like rocket bodies and satellites to re-enter uncontrollably, under the false premise that they will either burn up or fall into the ocean.
Indeed, NASA allowed an old battery pallet to be released from the International Space Station, knowing it would re-enter uncontrollably. NASA said it should burn up completely, which was proven wrong in March when a potentially lethal fragment crashed through the roof, then ceiling, and then floor of a house in Florida.
So far, no one is known to have been hurt by falling space junk, but that’s just a matter of luck; people are finding more and more pieces in or near inhabited areas worldwide.
Whose responsibility?
The 1972 Liability Convention makes countries absolutely liable for damage, including loss of life, caused by its space objects falling onto the surface of the Earth or striking airplanes in flight. And the 1967 Outer Space Treaty makes countries responsible for all their space actors, including private companies.
Yet the Liability Convention is an agreement between countries, which makes the interactions between private citizens — like Saskatchewan farmers — and powerful space companies — like SpaceX — less straightforward. In the absence of governmental action, individuals may need to resort to lawsuits.
As for the Crew Dragon trunk scattered across Saskatchewan, in June, SpaceX sent two employees in a rented U-Haul truck to pick up the pieces, reportedly paying farmers for the fragments. Had there been a death, or damage to million-dollar farm equipment, the outcome would have been much more complicated.
Sunita Williams and Butch Wilmore blasted off on June 5 aboard the brand new Boeing Starliner spaceship and docked the following day for what was meant to be roughly a week-long stay.
A pair of US astronauts stuck waiting to leave the International Space Station said Wednesday they were confident that the problem-plagued Boeing Starliner they rode up on would soon bring them home, even as significant uncertainties remain.
Butch Wilmore and Sunita Williams blasted off on June 5 aboard the brand new spaceship that NASA is hoping to certify to ferry crews to-and-from the orbital outpost.
They docked the following day for what was meant to be roughly a week-long stay, but their return was pushed back because of thruster malfunctions and helium leaks that came to light during the journey.
No date has been set for the return, but NASA officials said Wednesday they were eying “late July.”
Asked during a live press call from the station whether they still had faith in the Starliner team and the spaceship, mission commander Wilmore replied: “We’re absolutely confident.”
“I have a real good feeling in my heart that the spacecraft will bring us home, no problem,” added Sunita Williams.
She said they were continuing to enjoy their time aboard the ISS, performing tasks like changing out the pump on a machine that processes urine back into drinking water, and carrying out science experiments such as gene sequencing in the microgravity environment.
They have also tested Starliner as a “safe haven” vehicle in case of problems aboard the ISS and checked out how its life support performs when four people are inside.
Lingering uncertainty
Before Wilmore and Williams can come home, however, engineering teams need to run more simulations of similar thrusters and helium seals on the ground, to better understand the root causes of some of the technical issues Starliner experienced — and modify the way it will fly down, if necessary.
It was known there was one helium leak affecting the spaceship before the launch, but more leaks emerged during the flight. Helium, while non-combustible, provides pressure to the propulsion system.
What’s more, some of Starliner’s thrusters that provide fine maneuvering initially failed to kick in during its approach to the station, delaying docking.
Engineers are not sure why the craft’s computer “deselected” these thrusters, though they were able to restart all but one of them.
In a subsequent press call, Boeing executive Mark Nappi told reporters that the “working theory” for the thruster malfunction was overheating due to excessive firing.
Theories on the cause of the helium leaks ranged from debris entering the propulsion system to Boeing possibly installing seals that were undersized for the task.
NASA and Boeing insist Starliner could fly home in case of an emergency, particularly since the problems affected only certain thrusters that control orientation.
The £400 ring, which is scheduled to launch later this month, will come with a battery life of up to seven days and is designed to be worn 24 hours a day to help users monitor their health stats during the day, but also while they sleep.
Samsung has become the first tech giant to release a smart ring – which can track sleep, movement, periods and heart rate.
The South Korean company released the Galaxy Ring on Tuesday as part of its latest wearable technology – also announced was the new Galaxy Watch Ultra.
It said the £400 ring, which is scheduled to launch later this month, will come with a battery life of up to seven days and is designed to be worn 24 hours a day to help users monitor their health stats during the day, but also while they sleep.
Coming in three colours – gold, silver and black – and nine sizes, the ring uses artificial intelligence (AI) to analyse biometric data collected from the person wearing the device and connects to the Samsung health app.
It then has the ability to assess an individual’s well-being and deliver an “energy score” that will range from one to 100 and make recommendations like a virtual fitness coach.
Away from fitness, Samsung says users can also take photos or snooze alarm clocks by pinching their fingers.
Although smart rings are nothing new – members of the England squad were spotted wearing a tracking ring from Finnish health technology company Oura during training for Euro 2024 last month – Samsung is the first of the tech giants, ahead of the likes of Apple and Google, to release the technology.
The decision to do so has been called an “interesting bet” by industry expert, Ben Wood, chief analyst at CCS Insight.
Mr Wood said the smart ring has a strong selling point because many people do not wear smartwatches to bed so are missing out on potentially useful sleep data, but the “huge complexities” around the tech still makes it a risky launch.
Scientists say elusive object found just 18,000 light years from Earth at the center of Omega Centauri
Astronomers have found compelling evidence for a long-sought intermediate-mass black hole lurking at the heart of Omega Centauri, the largest and brightest globular cluster visible from Earth. This cosmic behemoth, weighing in at over 20,000 times the mass of our Sun, bridges the gap between stellar-mass black holes formed from collapsed stars and the supermassive monsters found in galactic centers.
The key to uncovering this elusive object? A group of stars behaving in a most peculiar way.
Using over two decades of ultra-precise observations from the Hubble Space Telescope, researchers detected seven stars near the cluster’s core zipping along at breakneck speeds – far faster than should be possible if only the combined gravity of the cluster’s normal stars was at play. These stellar speedsters are the smoking gun pointing to a massive, invisible object exerting its gravitational influence.
The discovery is akin to finding a group of cars racing around an empty lot at 200 mph. You’d immediately suspect there must be some kind of racetrack or hidden structure guiding their motion. In this case, that hidden structure is the gravity of an intermediate-mass black hole (IMBH). Study authors consider IMBHs to be the “missing link” in black hole formation.
‘Needle in a haystack’
Omega Centauri has long been a tempting target in the search for these middleweight black holes. As the remnant core of a dwarf galaxy consumed by the Milky Way long ago, it seemed a prime candidate to host such an object. Previous studies hinted at its presence, but debates raged over alternative explanations and the lack of definitively fast-moving stars – until now.
“Previous studies had prompted critical questions of ‘So where are the high-speed stars?’ We now have an answer to that and the confirmation that Omega Centauri contains an intermediate-mass black hole,” explains Nadine Neumayer, a group leader at the Max Planck Institute for Astronomy, in a statement. “At a distance of about 18,000 light-years, this is the closest known example of a massive black hole.”
The discovery of these stellar speed demons provides the strongest evidence yet for an intermediate-mass black hole in Omega Centauri. By carefully analyzing the stars’ motions, researchers calculated that the hidden object must weigh at least 8,200 times the mass of the Sun, with their best estimate placing it around 20,000 solar masses.
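The kind of estimate involved can be sketched with a simple enclosed-mass calculation: a star on a roughly circular orbit of radius r at speed v implies a central mass of about M ≈ v²r/G. The speed and radius below are hypothetical round numbers chosen to land near the reported estimate, not values from the study.

```python
# Enclosed-mass estimate M ~ v^2 * r / G for a star orbiting a central mass.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
PARSEC = 3.0857e16   # meters per parsec

v = 42e3             # hypothetical stellar speed: 42 km/s, in m/s
r = 0.05 * PARSEC    # hypothetical orbital radius: 0.05 parsec

mass_kg = v**2 * r / G
mass_solar = mass_kg / M_SUN
print(f"Implied central mass ~ {mass_solar:,.0f} solar masses")
```

A speed of a few tens of km/s at a fraction of a parsec from the cluster center yields a mass in the tens of thousands of suns, which is why fast-moving stars near the core are such a strong signature of a hidden massive object.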
This places the black hole squarely in the intermediate-mass category – far more massive than those left behind by exploding stars, yet not nearly as gargantuan as the supermassive black holes powering galactic cores. These rare cosmic middleweights are crucial missing links in our understanding of how the universe’s largest black holes grew over time.
Artificial intelligence can not only pass college exams but often outperform human students, all while remaining virtually undetected. This eye-opening research, conducted at the University of Reading, serves as a “wake-up call” for the education sector, highlighting the urgent need to address the challenges posed by AI in academic settings.
The study, published in the journal PLOS ONE, aptly described as a real-world “Turing test,” involved secretly submitting AI-generated exam answers alongside those of real students across five undergraduate psychology modules. The results were nothing short of astonishing. A staggering 94% of the AI-written submissions went undetected by examiners, despite being entirely produced by an AI system without any human modification.
But the surprises didn’t end there. Not only did the AI submissions fly under the radar, but they also consistently outperformed their human counterparts. On average, the AI-generated answers scored half a grade boundary higher than those of real students. In some cases, the AI advantage approached a full grade boundary, with AI submissions achieving first-class honors while human students lagged behind.
This revelation raises profound questions about the future of education and assessment. As AI technologies like ChatGPT become increasingly sophisticated and accessible, how can universities ensure the integrity of their exams and the value of their degrees? The study’s findings suggest that current methods of detecting AI-generated content are woefully inadequate, leaving educational institutions vulnerable to a new form of high-tech cheating.
“Many institutions have moved away from traditional exams to make assessment more inclusive. Our research shows it is of international importance to understand how AI will affect the integrity of educational assessments,” says co-author Peter Scarfe, an associate professor at Reading’s School of Psychology and Clinical Language Sciences, in a statement. “We won’t necessarily go back fully to hand-written exams, but the global education sector will need to evolve in the face of AI,” he continues. “It is testament to the candid academic rigour and commitment to research integrity at Reading that we have turned the microscope on ourselves to lead in this.”
The implications of this research extend far beyond the halls of academia. In a world where AI can seamlessly mimic human intelligence in written assessments, employers may find themselves questioning the reliability of academic credentials. Moreover, the very nature of learning and knowledge acquisition could be called into question. If AI can ace exams without understanding or retaining information, what does this mean for the future of education and professional development?
The researchers behind the study emphasize that their findings should not be seen as an indictment of current educational practices but rather as a call to action. They argue that the education sector must adapt to this new reality, finding ways to harness the power of AI while maintaining the integrity and value of human learning.
“As a sector, we need to agree how we expect students to use and acknowledge the role of AI in their work. The same is true of the wider use of AI in other areas of life to prevent a crisis of trust across society,” adds co-author Etienne Roesch, a professor in Reading’s School of Psychology and Clinical Language Sciences. “Our study highlights the responsibility we have as producers and consumers of information. We need to double down on our commitment to academic and research integrity.”
As we grapple with these challenges, one thing is clear: the landscape of education is changing rapidly, and institutions must evolve to keep pace. The ability to detect and manage AI use in academic settings will likely become a crucial skill for educators and administrators alike. At the same time, there may be opportunities to integrate AI into the learning process in ways that enhance rather than undermine human education.
This alarming study serves as both a warning and an invitation – a chance to reimagine education for the AI age. As we move forward, the goal will be to find a balance that embraces technological advancement while preserving the unique value of human intelligence and creativity.
iOS 18’s third developer beta includes a “dynamic” version of the default iOS 18 wallpaper that shifts colors, as reported by 9to5Mac. Previous betas had four color options (and dark mode counterparts) for the wallpaper, but this new dynamic option changes between colors over time.
Check out this video from leaker ShrimpApplePro to see the wallpaper and some of the color shifts. I think it looks pretty good!
NASA has alerted that a massive, aeroplane-sized asteroid is set to make its closest approach to Earth. Here are the details.
Why do scientists keep a constant eye on asteroids? The answer is simple: an asteroid strike could have catastrophic consequences for Earth. According to prevailing theories, the massive asteroid that hit Earth around 66 million years ago wiped out the dinosaurs and left a devastating aftermath. To prepare for any future asteroid threat, NASA and other space agencies keep watch on these space rocks.
Now, a massive, aeroplane-sized asteroid is approaching Earth and will make its closest approach on July 9 at 14:54 UTC (8:24 PM IST). NASA has designated it Asteroid 2024 NB1, and it measures about 200 feet (61 metres) across. It will be just 3.48 million miles from Earth at its closest point, travelling at a speed of 35,010 kilometres per hour.
This asteroid belongs to the Aten group, known for orbits that frequently intersect Earth’s. The group is named after its first discovered member, 2062 Aten.
Is Asteroid 2024 NB1 Potentially Hazardous?
According to the space agency, an asteroid is considered potentially hazardous if it comes within 4.6 million miles of Earth and is larger than about 150 meters. NASA’s Jet Propulsion Laboratory (JPL) has provided details about upcoming asteroids and alerted about Asteroid 2024 NB1. Thankfully, due to its size, this asteroid doesn’t meet the criteria of a potentially hazardous asteroid.
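As a rough sketch of how those two criteria combine (the 4.6-million-mile and 150-metre thresholds are from the article; the function name is my own, for illustration only):

```python
# Illustrative sketch (not NASA code) of the "potentially hazardous
# asteroid" criteria described above: an object qualifies only if it
# passes within 4.6 million miles of Earth AND is at least ~150 m across.

def is_potentially_hazardous(miss_distance_miles: float, diameter_m: float) -> bool:
    """Return True if an object meets both criteria from the article."""
    return miss_distance_miles <= 4.6e6 and diameter_m >= 150

# Asteroid 2024 NB1: closest approach ~3.48 million miles, ~61 m across.
# It passes the distance test but fails the size test.
print(is_potentially_hazardous(3.48e6, 61))  # False
```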
Still, it is important to monitor this asteroid, as even a slight deviation in its orbit could send the space rock towards Earth. NASA assures that it will pass by safely with no risk of impact; even so, constant tracking of asteroids remains crucial.
ISRO Chairman S. Somanath is focusing on asteroids to defend Earth from potential catastrophic impacts. He plans to observe Asteroid 99942 Apophis, which will pass close to Earth on April 13, 2029. India may collaborate with JAXA, ESA, and NASA on this mission. Somanath emphasized the need for planetary defense, warning that an asteroid impact could be disastrous. Future plans include asteroid landings to study impact risks and prepare defenses.
Here’s some potentially good news for Apple Watch owners who have Apple Watch Ultra screen envy: the Apple Watch Series 10 may get the same size screen as the company’s 49mm outdoorsy watch. That’s in addition to other possible improvements, like a thinner case and a new chip that could “lay the groundwork for some AI enhancements down the road,” according to Mark Gurman’s latest Power On newsletter for Bloomberg.
The screen rumor seems to back up a CAD render from last month that showed a Series 10 watch with a two-inch display. Gurman says — and that render appears to show — that the watch won’t otherwise feature any major design changes. If that means no magnetic watch strap attachments or whatever, I’m putting that part of the rumor in the “good” column — like many, I’ve accumulated a number of watch straps over the years, and I’d like to keep using them if I decide to upgrade.
As for new sensors, that seems cloudier, as Gurman says Apple is struggling with two big health sensor updates it’s been planning. The company reportedly hasn’t been able to get its rumored blood pressure monitor’s reliability up to snuff, and he writes that not being able to use its banned blood oxygen sensor is hampering its efforts to add sleep apnea detection.
Apple is reportedly planning a cheaper version of the Apple Watch SE to bring pricing in line with Samsung’s $199 Galaxy Watch FE. One way it might do this, apparently, is with a rigid plastic case. That could make the watch cheaper, sure, and perhaps lighter — I can think of at least one other Apple product that could stand to benefit from a similar decision! (The Vision Pro. I’m talking about the Vision Pro.)
Tech prophet who predicted the iPhone years in advance makes alarming forecasts for coming years
* After 2029, humans will begin to merge with machines
* Dead people will come back first as simulations – then as ‘printed’ living bodies
A tech expert with a track record of predicting sea changes in the industry has made several eye-popping forecasts in a new book.
Google’s Ray Kurzweil famously predicted the iPhone era and that a computer would beat the world chess champion by 1998.
In his new book, ‘The Singularity is Nearer’, Kurzweil predicts that humans will fully merge with AI, becoming immortal cyborgs, by 2045.
He also predicts that advancements in AI will make it possible to resurrect loved ones and connect our brains to cloud technology, in what he calls the ‘fifth epoch’ of human intelligence.
The singularity is the idea that artificial intelligence (AI) will eventually surpass human intelligence, fundamentally changing human existence.
Kurzweil writes: ‘Babies born today will be just graduating college when the Singularity happens.
‘Eventually nanotechnology will enable these trends to culminate in directly expanding our brains with layers of virtual neurons in the cloud.
‘In this way we will merge with AI. These are the most exciting years in all of history.’
He says that recent breakthroughs in AI such as ChatGPT show that the 2005 prediction in his earlier book ‘The Singularity is Near’ was correct, and ‘the trajectory is clear’.
His most shocking predictions are below:
The dead will come back to life
Kurzweil believes that AI technology holds the promise to ‘bring back’ the dead – at first in the form of simulations which replicate a person, then physically back to life.
After 378 days in a 3D-printed, Mars-imitation bunker in Texas, four NASA volunteers, Kelly Haston, Anca Selariu, Ross Brockwell and Nathan Jones, were finally set free on Saturday around 10pm UK time, completing their simulated Mars mission. But they may have missed quite a lot.
Speaking at a news conference after they were allowed back into the world, Ms Selariu said bringing life to Mars was the “one thing dearest to my heart”.
She said her “beloved friends and family have always been there when I needed them” and she will “always have them in my heart and in my memory wherever I go”.
The volunteers were a part of NASA’s Crew Health and Performance Exploration Analog (CHAPEA) mission, which began on 25 June last year.
Over that time, they simulated Mars mission operations, including “Marswalks”, grew and ate their own vegetables including tomatoes, peppers, and leafy greens, maintained their equipment and lived under realistic Mars circumstances, NASA said.
This included a communication delay with Earth, limited resources and isolation.
The crew is the first of three to undertake such missions at the Johnson Space Center, in Houston, Texas.
The 3D-printed structure, known as the Mars Dune Alpha, has been described as “an isolated 1,700 square foot habitat”.
This marks the end of the first mission in the planned programme, which is intended to help prepare the US space agency for the real thing.
NASA is still planning for a return to the Moon – which they hope will act as a springboard for Mars exploration.
What they may have missed:
While they had delayed communication with NASA, those taking part in the mission may not have been kept up to date with what has been going on around the world.
Here are some of the events they may have missed since they were locked away:
NASA issues an alert about asteroid 2024 ME1, hurtling through space at an alarming speed.
NASA is tracking a 120-foot (37-metre) wide asteroid named 2024 ME1, which will make a close approach to Earth on July 10, 2024. This airplane-sized space rock is hurtling through space at a staggering speed. Let’s learn more about its speed, closest approach to Earth, time of encounter with our planet, and if it is a potentially hazardous asteroid.
Asteroid 2024 ME1 Details
While 2024 ME1, part of the Amor group, will zip past our planet at a whopping 30,215 kilometres per hour (18,774 miles per hour) on July 10 at 14:51 UTC (8:21 PM IST), it will safely skim by at a comfortable distance of 4.3 million kilometres (2.7 million miles). That’s roughly 11 times the distance between Earth and the Moon!
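The “roughly 11 times” comparison is easy to verify. As a quick sketch, using the commonly cited mean Earth-Moon distance of about 384,400 km (a figure not in the article):

```python
# Verify the "roughly 11 lunar distances" comparison made above.
miss_distance_km = 4.3e6      # 2024 ME1's closest approach to Earth
earth_moon_km = 384_400       # commonly cited mean Earth-Moon distance

ratio = miss_distance_km / earth_moon_km
print(round(ratio, 1))        # about 11.2 - roughly 11 lunar distances
```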
What are Amor Asteroids?
Amor asteroids, like 2024 ME1, have orbits that bring them close to Earth but not within the planet’s orbit. Their paths are well-monitored by NASA and other space agencies, ensuring any potential threats are identified well in advance. This continuous monitoring allows scientists to predict and track these space rocks with high precision.
Asteroids are remnants from the early solar system, providing valuable information about its formation and history. Though 2024 ME1 poses no threat to Earth, its flyby offers scientists a unique opportunity to study its characteristics and behaviour, enhancing our understanding of these celestial objects.
Public interest in space and potential asteroid impacts has grown in recent years, partly due to the increasing accessibility of space-related news and the dramatic portrayal of such events in movies.
While stressing that no country can individually develop a planetary protection system against asteroids, Isro chairman S Somanath said on Wednesday that India wishes to, and is qualified to, be part of larger global missions that study asteroids. Speaking at Isro’s first workshop on planetary defence for students at its headquarters in Bengaluru, he emphasised the need for international cooperation in protecting Earth from potential asteroid impacts.
He pointed out that while asteroids pose a potential danger to Earth, they also offer valuable opportunities for scientific exploration. He noted that studying asteroids could provide insights into the formation of the universe and the origins of life on Earth.
Somanath expressed India’s interest in participating in global asteroid research and defence initiatives. He suggested that Isro could contribute to upcoming international missions, such as the one planned to study the asteroid Apophis in 2029. He proposed that India could provide instruments or other support to joint missions led by space agencies like NASA, ESA, and JAXA. Highlighting India’s growing capabilities in space exploration, Somanath cited recent achievements such as the Chandrayaan-3 and the Aditya-L1 solar observatory missions. He specifically mentioned the successful placement of Aditya-L1 in its halo orbit around the Lagrange point L1, demonstrating India’s ability to execute complex space manoeuvres.
He said these accomplishments showcase India’s readiness to take on more challenging missions, including potential asteroid exploration.
The tech giant has set a target of achieving net zero emissions by the end of the decade. But now, as the company rushes into the artificial intelligence race, it has admitted that reducing emissions from current levels “may be challenging”.
Google has admitted its greenhouse gas emissions have risen 48% over the past five years – largely because of artificial intelligence – scuppering its climate aims.
AI systems require intense levels of computational power, and this has piled pressure on the tech giant’s data centres around the world.
In its latest environmental report, Google went on to warn that reducing these emissions “may be challenging” – especially as it builds new infrastructure.
At the start of this year, the company announced it was investing £788m in the UK to establish a brand-new data centre in direct response to growing demand for AI.
But all of this comes as Google’s self-set target of achieving net zero emissions by the end of the decade looms closer.
There have been growing concerns about the impact AI could have on climate change as adoption continues to grow.
A recent study by the International Energy Agency predicted that the amount of electricity used by data centres could double between 2022 and 2026.
Although Google’s figures reveal that its data centres in Europe and the Americas receive most of their energy from carbon-free sources, this isn’t the case everywhere.
That’s because sites in the Middle East, Asia and Australia use a much lower proportion of energy from cleaner sources.
Google claims it is “actively working through” the “significant challenges” it faces – and some initiatives rolled out to lower emissions may not be beneficial immediately.
The report added: “While we advanced clean energy on many of the grids where we operate, there are still some hard-to-decarbonise regions like Asia-Pacific where CFE (carbon-free energy) isn’t readily available.
“In addition, we often see longer lead times between initial investments and construction of clean energy projects and the resulting GHG reductions from them.”
Google went on to argue that AI could ultimately help the world reach key climate targets and even improve weather forecasts – a sentiment shared by Microsoft co-founder Bill Gates.
But Lisa Sachs from the Columbia Center on Sustainable Investment, says Google must do more to collaborate with cleaner firms and invest in the electrical grid.
“The reality is that we are far behind what we could already be doing now with the technology that we have, with the resources that we have, in terms of advancing the transition,” she said.
Until Friday, OpenAI’s recently launched ChatGPT macOS app had a potentially worrying security issue: it wasn’t hard to find your chats stored on your computer and read them in plain text. That meant that if a bad actor or malicious app had access to your machine, they could easily read your conversations with ChatGPT and the data contained within them.
As demonstrated by Pedro José Pereira Vieito on Threads, the ease of access meant it was possible to have another app access those files and show you the text of your conversations right after they happened. Pereira Vieito shared the app he made with me, and I used it to make a video showing how the app can read my ChatGPT conversations with the click of a button. I was also able to find the files on my computer and see the text of conversations just by changing the file name.
After The Verge contacted OpenAI about the issue, the company released an update that it says encrypts the chats. “We are aware of this issue and have shipped a new version of the application which encrypts these conversations,” OpenAI spokesperson Taya Christianson says in a statement to The Verge. “We’re committed to providing a helpful user experience while maintaining our high security standards as our technology evolves.”
After downloading the update, Pereira Vieito’s app no longer works for me, and I can’t see my conversations in plain text.
I asked Pereira Vieito how he discovered the original issue. “I was curious about why [OpenAI] opted out of using the app sandbox protections and ended up checking where they stored the app data,” he said. OpenAI only offers the ChatGPT macOS app through its own website, meaning the app doesn’t have to follow Apple’s sandboxing requirements that apply to software distributed via the Mac App Store.
The Chairman of the Indian Space Research Organisation (ISRO), Dr S Somanath, said in an interview that he and the entire nation would be very proud if the agency flies Prime Minister Narendra Modi to space.
“All of us will be very proud. Not only me, the entire nation will be proud if we have that ability to confidently send such a state man to space (sic),” Somanath said in his conversation with NDTV when asked if he would accommodate PM Modi in a mission to space. However, he added that he would wait for the Gaganyaan Mission to reach a stage advanced enough to do that.
“See whenever such a thing has to happen, the Head of the State has to fly to the International Space Station or a space station of that station, it must be on our vehicle and from our land. I think I should say that only. So, I would wait for the Gaganyaan to be ready, to be proven, to be qualified enough to do that,” Somanath told the publication.
He also said that he could not consider any candidate, including VIPs, for the mission as of now since he is limited by the “availability of the trained astronauts” who can be potential candidates.
Elaborating on his response about accommodating PM Modi in the mission, the ISRO chief said that while he would be very happy to oblige, “that’s not the point here”, pointing to the other responsibilities the PM has. Somanath also emphasised ISRO’s desire to develop human spaceflight capability first before considering such a venture.
This comes after his recent address at the India Space Congress 2024 where the ISRO chief said that Prime Minister Narendra Modi’s vision for Amrit Kaal includes extending human space activity beyond the Gaganyaan mission, with the goal of landing on the moon by 2040.
The Pragyan rover, deployed and commanded by Vikram lander, spotted small rock fragments distributed around the rim, wall slopes, and floor of small craters at the southern high-latitude landing site, according to recent findings.
The rover of India’s moon mission Chandrayaan-3 had an interesting encounter on the lunar surface near its landing site. The Pragyan rover, deployed and commanded by Vikram lander, spotted small rock fragments distributed around the rim, wall slopes, and floor of small craters at the southern high-latitude landing site, according to recent findings.
The rover traversed around 103 meters on the lunar surface in a single lunar day.
The results could prove to be a significant step in lunar exploration as they support the previous studies that suggested gradual coarsening of rock fragments in the interior of lunar regolith.
The 27 kilogram Pragyan rover – that was carried in the underbelly of the Vikram lander – was equipped with cameras and instruments to analyse the lunar soil. It also carried the ISRO logo and the Indian tri-colour to the lunar surface.
As per the findings, the number and size of rock fragments increased when the Pragyan rover navigated approximately 39 metres west of the landing site, Shiv Shakti point – the name given to Chandrayaan-3’s landing zone by Prime Minister Narendra Modi. A plausible source for the encountered rock fragments could be a nearly 10-metre-diameter crater, it said.
The paper, presented earlier this year at the International Conference on Planets, Exoplanets and Habitability in Ahmedabad, proposed that this crater excavated and redistributed the rock fragments around the west of the landing site, which were buried several times by the lunar regolith overturning mechanism, and eventually exposed by the small craters encountered by the Pragyan rover.
Two of the rock fragments indicated evidence of degradation, implying that they have been subjected to space weathering, it said.
Recently, ISRO chief S Somanath told NDTV that with the next moon mission, Chandrayaan-4, the space agency is aiming to bring a lunar sample back to Earth from the ‘Shiv Shakti’ point.
Gurman predicts that Apple Intelligence will soon have two tiers, with the paid version named Apple Intelligence+.
Apple recently introduced Apple Intelligence, its suite of AI features for Apple products, at its WWDC 2024 event. For now, it is said to be available free of cost across iOS 18, iPadOS 18, and macOS Sequoia. However, Bloomberg’s Mark Gurman says that it will not be free to use forever. According to him, Apple will at some point split Apple Intelligence into two tiers. One will be free but offer a limited set of features, while the paid version will offer the full suite of AI features across iPhone, iPad and Mac.
Gurman predicts that the paid version will be named Apple Intelligence+. Additionally, he expects Apple to make more money from such paid services than from selling devices; hence, Apple Intelligence will be a key focus area for the tech giant.
He also noted that Apple is planning to take a cut of the subscription revenue from every AI partner that integrates into its devices.
Among iPhones, Apple Intelligence will be available on the iPhone 15 Pro and iPhone 15 Pro Max. The upcoming iPhone 16 series, along with Macs and iPads with M-series chips, is also expected to get Apple’s AI.
In another report, Gurman revealed that Apple is planning to bring AI to its Vision Pro mixed reality headset as well. However, it is not expected to roll out any time soon. The main challenge here is rethinking how the features will look in mixed reality, rather than on a MacBook or iPhone screen.
Whatsapp, through its ‘small business app’, aims to provide end-to-end solutions for MSMEs, whether for customer and vendor interfaces or for internal systems like human resources and leave management.
Meta-owned messaging service Whatsapp is looking to build on its ubiquity in India and is now targeting the country’s micro, small and medium enterprises (MSMEs) for its next stage of growth.
Unlike larger corporations, India’s smaller businesses may not have the resources to build their own apps, websites or internal management systems. Whatsapp, through its ‘small business app’, aims to provide them with end-to-end solutions, whether for customer and vendor interfaces or for internal systems like human resources and leave management.
“India is one of our largest markets in terms of users. And India has the largest number of MSMEs in any country. That is what we are trying to build upon,” said Ravi Garg, director of Business Messaging at Meta. Garg was speaking with DH at Meta’s India head office in Gurugram.
While Meta does not share country-wise data, around 200 million businesses globally use the company’s small business app. Garg says a large portion of such users are in India, and the company expects a robust growth for Whatsapp usage among MSMEs in the country, in the coming years.
For large enterprises, the company provides the Whatsapp Business Platform, and Garg is optimistic on that front as well. While a messaging platform like Whatsapp lends itself more naturally to business-to-consumer (B2C) usage, it is increasingly being used for business-to-business (B2B) work as well. Garg says some of the larger companies have developed vendor bots, which enable them to directly manage orders and input materials from vendors and partners in the value chain.
The promised artificial intelligence revolution requires data. Lots and lots of data. OpenAI and Google have begun using YouTube videos to train their text-based AI models. But what does the YouTube archive actually include?
Our team of digital media researchers at the University of Massachusetts Amherst collected and analyzed random samples of YouTube videos to learn more about that archive. We published an 85-page paper about that dataset and set up a website called TubeStats for researchers and journalists who need basic information about YouTube.
Now, we’re taking a closer look at some of our more surprising findings to better understand how these obscure videos might become part of powerful AI systems. We’ve found that many YouTube videos are meant for personal use or for small groups of people, and a significant proportion were created by children who appear to be under 13.
Bulk of the YouTube iceberg
Most people’s experience of YouTube is algorithmically curated: Up to 70% of the videos users watch are recommended by the site’s algorithms. Recommended videos are typically popular content such as influencer stunts, news clips, explainer videos, travel vlogs, and video game reviews, while content that is not recommended languishes in obscurity.
Some YouTube content emulates popular creators or fits into established genres, but much of it is personal: family celebrations, selfies set to music, homework assignments, video game clips without context, and kids dancing. The obscure side of YouTube – the vast majority of the estimated 14.8 billion videos created and uploaded to the platform – is poorly understood.
Illuminating this aspect of YouTube – and social media generally – is difficult because big tech companies have become increasingly hostile to researchers.
We’ve found that many videos on YouTube were never meant to be shared widely. We documented thousands of short, personal videos that have few views but high engagement – likes and comments – implying a small but highly engaged audience. These were clearly meant for a small audience of friends and family. Such social uses of YouTube contrast with videos that try to maximize their audience, suggesting another way to use YouTube: as a video-centered social network for small groups.
Other videos seem intended for a different kind of small, fixed audience: recorded classes from pandemic-era virtual instruction, school board meetings, and work meetings. While not what most people think of as social uses, they likewise imply that their creators have a different expectation about the audience for the videos than creators of the kind of content people see in their recommendations.
Fuel for the AI machine
It was with this broader understanding that we read The New York Times exposé on how OpenAI and Google turned to YouTube in a race to find new troves of data to train their large language models. An archive of YouTube transcripts makes an extraordinary dataset for text-based models.
There is also speculation, fueled in part by an evasive answer from OpenAI’s chief technology officer, Mira Murati, that the videos themselves could be used to train AI text-to-video models such as OpenAI’s Sora.
The New York Times story raised concerns about YouTube’s terms of service and, of course, the copyright issues that pervade much of the debate about AI. But there’s another problem: How could anyone know what an archive of more than 14 billion videos uploaded by people all over the world actually contains? It’s not entirely clear that Google knows or even could know if it wanted to.
Kids as content creators
We were surprised to find an unsettling number of videos featuring kids or apparently created by them. YouTube requires uploaders to be at least 13 years old, but we frequently saw children who appeared to be much younger than that, typically dancing, singing or playing video games.
In our preliminary research, our coders determined that nearly a fifth of randomly selected videos with at least one person’s face visible likely included someone under 13. This figure does not include videos that were clearly shot with the consent of a parent or guardian.
Our current sample size of 250 is relatively small – we are working on coding a much larger sample – but the findings thus far are consistent with what we’ve seen in the past. We’re not aiming to scold Google. Age validation on the internet is infamously difficult and fraught, and we have no way of determining whether these videos were uploaded with the consent of a parent or guardian. But we want to underscore what is being ingested by these large companies’ AI models.
Small reach, big influence
It’s tempting to assume OpenAI is using highly produced influencer videos or TV newscasts posted to the platform to train its models, but previous research on large language model training data shows that the most popular content is not always the most influential in training AI models. A virtually unwatched conversation between three friends could have much more linguistic value in training a chatbot language model than a music video with millions of views.
Unfortunately, OpenAI and other AI companies are quite opaque about their training materials: They don’t specify what goes in and what doesn’t. Most of the time, researchers can infer problems with training data through biases in AI systems’ output. But when we do get a glimpse at training data, there’s often cause for concern. For example, Human Rights Watch released a report on June 10, 2024, that showed that a popular training dataset includes many photos of identifiable kids.
The history of big tech self-regulation is filled with moving goalposts. OpenAI, in particular, is notorious for asking for forgiveness rather than permission and has faced increasing criticism for putting profit over safety.
Threads users will now be able to share their posts to other ActivityPub-based federated platforms like Mastodon, and they will also be able to see and like replies.
Meta’s Chief Executive Mark Zuckerberg on June 25 announced that people using Threads in over 100 countries can share their posts to the fediverse starting today, as part of the social networking giant’s push to make its text-based conversation app interoperable.
The fediverse is a group of decentralised social networks, including Mastodon, that can communicate with each other through the ActivityPub protocol.
Along with this rollout, Meta also announced a few improvements to this integration. People will now be able to see and like replies from other ActivityPub-based federated platforms and servers like Mastodon.
Meta notes that users can’t respond to these replies as yet, indicating that the feature will likely be added in the coming months.
This rollout comes three months after Meta introduced this feature to users in select countries including the United States in March, following a company test in December 2023.
Threads will be Meta’s first app compatible with rival services, enabling users to interact with a broader community beyond the app’s existing user base and, in the future, to transfer their content to another service, giving consumers more control over their data and the ability to choose online communities that align with their values.
Moneycontrol was the first to report on Meta considering making Threads, then codenamed P92, compatible with ActivityPub in March 2023.
ActivityPub is an open social networking protocol established by the World Wide Web Consortium (W3C) that powers services such as Mastodon (social networking), Pixelfed (photo sharing), and PeerTube (video sharing) among others.
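ActivityPub servers federate by exchanging JSON activity objects built on the ActivityStreams vocabulary. As a minimal sketch, a post is typically a `Note` wrapped in a `Create` activity; the field names (`@context`, `type`, `actor`, `object`) come from the W3C specification, but the actor URL and content below are hypothetical, not from any real Threads or Mastodon server.

```python
import json

# A hypothetical post, modeled as an ActivityStreams "Note".
note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "content": "Hello, fediverse!",
    "attributedTo": "https://example.social/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
}

# Publishing the note is expressed as a "Create" activity,
# which a server would deliver to followers' inboxes.
create_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.social/users/alice",
    "object": note,
}

payload = json.dumps(create_activity)
```

Because any server that speaks this protocol can parse such payloads, a Threads post can surface on Mastodon (and vice versa) without the two services sharing any infrastructure.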
Over the past few months, several platforms and services have added or announced support for ActivityPub including WordPress, Tumblr, Flipboard, Medium, and Firefox maker Mozilla.
This rollout comes on the heels of the launch of the Threads API, which enables third-party developers to build tools that extend the functionality of the platform.
The company said the API would enable creators, developers and brands to build their own unique integrations, manage their Threads presence at scale, and share content with their communities.
In May, Threads also rolled out a new TweetDeck-like interface with pinned columns to all its web users, in a bid to attract more power users to its platform. This interface lets users pin separate columns for their favourite searches, tags, accounts, saved posts, and notifications.
In April, Zuckerberg said Threads had more than 150 million monthly active users, up from 130 million in February and about 100 million users in October 2023.
The two NASA astronauts, Barry Wilmore and Sunita Williams, were scheduled to return on June 14.
Indian Space Research Organisation (ISRO) chief S Somanath has assured that the delayed return of Indian-origin astronaut Sunita Williams from the International Space Station (ISS) should not be a worrying issue as the space station is a safe place for people to stay for a long time.
In an interview with NDTV, the ISRO chief said, “It is not just Sunita Williams or any other astronaut. Getting stranded or stuck in a place is not a narrative that we must have at this moment. All of them have to come back someday. The whole issue is about testing a new crew module called Boeing Starliner, its ability to go up there and then come back safely. There are enough capabilities with ground launch providers (to bring them home). That’s not an issue. ISS is a safe place for people to stay for a long time.”
The two NASA astronauts, Barry Wilmore and Sunita Williams, were scheduled to return on June 14. However, the pair has no set date to return to Earth as their return has been delayed multiple times amid several mechanical issues with Boeing’s Starliner spacecraft.
Somanath emphasised that instead of worrying about the astronauts’ return, the testing of a new crew module and its ability to travel to space should be considered.
He also praised Williams for her courage in travelling on the first flight of a new space vehicle. “We are all proud of her. There are many missions to her credit. It is a courageous thing to travel on the first flight of a new space vehicle. She herself is part of the design team and used inputs from her experience,” he added.
Meanwhile, NASA’s Commercial Crew Program manager, Steve Stich has said that the US space agency is considering extending the duration of Starliner’s mission from 45 days to 90 days, CNN reported.
The space agency said that part of that desired extension is due to the ground tests that Boeing and NASA plan to conduct in New Mexico, seeking to better understand why some of the Starliner’s thrusters unexpectedly failed during the first leg of its journey.
ARTIFICIAL intelligence is here to stay – and many users are hitting back at tech giants like Microsoft and Meta over data privacy concerns.
While the companies have taken note of the criticism, not all responses have been equally reassuring.
Microsoft
Microsoft was forced to delay the release of an artificial intelligence tool called Recall following a wave of backlash.
The feature was introduced last month, billed as “your everyday AI companion.”
It takes screen captures of your device every few seconds to create a library of searchable content. This includes passwords, conversations, and private photos.
Its release was delayed indefinitely following an outpouring of criticism from data privacy experts, including the Information Commissioner’s Office in the UK.
In the wake of the outrage, Microsoft announced changes to Recall ahead of its public release.
“Recall will now shift from a preview experience broadly available for Copilot+ PCs on June 18, 2024, to a preview available first in the Windows Insider Program (WIP) in the coming weeks,” the company told The U.S. Sun.
“Following receiving feedback on Recall from our Windows Insider Community, as we typically do, we plan to make Recall (preview) available for all Copilot+ PCs coming soon.”
When asked to comment on claims that the tool was a security risk, the company declined to respond.
Recall starred in the unveiling of Microsoft’s new computers at its developer conference last month.
Yusuf Mehdi, the company’s corporate vice president, said the tool used AI “to make it possible to access virtually anything you have ever seen on your PC.”
Shortly after its debut, the ICO vowed to investigate Microsoft over user privacy concerns.
Microsoft announced a host of updates to the forthcoming tool on June 13. Recall will now be turned off by default.
The company has continuously reaffirmed its “commitment to responsible AI,” identifying privacy and security as guiding principles.
Adobe
Adobe overhauled an update to its terms of service after customers raised concerns that their work would be used for artificial intelligence training.
The software company faced a deluge of criticism over ambiguous wording in a reacceptance of its terms of use from earlier this month.
Customers complained they were locked out of their accounts unless they agreed to grant Adobe “a worldwide royalty-free license to reproduce, display, distribute, modify and sublicense” their work.
Some users suspected the company was accessing and using their work to train generative AI models.
Officials including David Wadhwani, Adobe’s president of digital media, and Dana Rao, the company’s chief trust officer, insisted the terms had been misinterpreted.
In a statement, the company denied training generative AI on customer content, taking ownership of a customer’s work, or allowing access to customer content beyond what’s required by law.
The controversy marked the latest development in the long-standing feud between Adobe and its users, spurred on by its use of AI technology.
The company, which dominates the market with tools for graphic design and video editing, released its Firefly AI model in March 2023.
Firefly and similar programs train on datasets of preexisting work to create text, images, music, or video in response to users’ prompts.
Artists sounded the alarm after realizing their names were being used as tags for AI-generated imagery in Adobe Stock search results. In some cases, the AI art appeared to mimic their style.
Illustrator Kelly McKernan was one outspoken critic.
“Hey @Adobe, since Firefly is supposedly ethically created, why are these AI generated stock images using my name as a prompt in your data set?” she tweeted.
These concerns only intensified following the update to the terms of use. Sasha Yanshin, a YouTuber, announced that he had canceled his Adobe license “after many years as a customer.”
“This is beyond insane. No creator in their right mind can accept this,” he wrote.
“You pay a huge monthly subscription and they want to own your content and your entire business as well.”
Officials have conceded that language used in the reacceptance of the terms of service was ambiguous at best.
Adobe’s chief product officer, Scott Belsky, acknowledged in a social media post that the summary wording was “unclear” and that “trust and transparency couldn’t be more crucial these days.”
Belsky and Rao addressed the backlash in a news release on Adobe’s official blog, writing that they had an opportunity “to be clearer and address the concerns raised by the community.”
Adobe’s latest terms explicitly state that its software “will not use your Local or Cloud Content to train generative AI.”
The only exception is if a user submits work to the Adobe Stock marketplace – it is then fair game to be used by Firefly.
Meta
Meta has come under fire for training artificial intelligence tools on the data of its billions of Facebook and Instagram users.
Suspicion arose in May that the company had changed its security policies in anticipation of the backlash it would receive for scraping content from social media.
One of the first people to sound the alarm was Martin Keary, the vice president of product design at Muse Group.
Keary, who is based in the United Kingdom, said he’d received a notification that the company planned to begin training its AI on user content.
Following a wave of backlash, Meta issued an official statement to European users.
The company insisted it was not training the AI on private messages, only content users chose to make public, and never pulled information from the accounts of users under 18 years old.
An opt-out form became available towards the end of 2023 under the name Data Subject Rights for Third Party Information Used for AI at Meta.
At the time, the company said its latest open-source language model, Llama 2, had not been trained on user data.
But things appear to have changed – and while users in the EU can opt out, users in the United States lack a legal argument to do so in the absence of national privacy laws.
EU users can complete the Data Subject Rights form, which is nestled away in the Settings section of Instagram and Facebook under Privacy Policy.
But the company says it can only address your request after you demonstrate that the AI in its models “has knowledge” of you.
The form instructs users to submit prompts they fed an AI tool that resulted in their personal information being returned to them, as well as proof of that response.
There is also a disclaimer informing users that their opt-out request will only opt them out in accordance with “local laws.”
Advocates with NOYB – European Center for Digital Rights filed complaints against the tech giant in nearly a dozen countries.
The Irish Data Protection Commission (DPC) subsequently issued an official request to Meta to address the lawsuits.
But the company hit back at the DPC, calling the dispute “a step backwards for European innovation.”
Meta insists that its approach complies with legal regulations including the EU’s General Data Protection Regulation. The company did not immediately return a request for comment.
Amazon
The online retailer caught flak after dozens of AI-generated books appeared on the platform.
The issue first arose about a year ago, when authors spotted works under their names that they had not created.
Compounding the issue was a rash of books containing false and potentially harmful information, including several about mushroom foraging.
One of the most outspoken critics was author Jane Friedman. “I would rather see my books get pirated than this,” she declared in a blog post from August 2023.
The company announced the new limitations in a post on its Kindle Direct Publishing forum that September. KDP allows authors to publish their books and list them for sale on Amazon.
“While we have not seen a spike in our publishing numbers, in order to help protect against abuse, we are lowering the volume limits we have in place on new title creations,” the statement read.
Amazon claimed it was “actively monitoring the rapid evolution of generative AI and the impact it is having on reading, writing, and publishing”.
The tech giant subsequently removed the AI-generated books that were falsely listed as being written by Friedman.
Remember learning to ride a bike or your first trip to the zoo? These childhood memories often stick with us well into adulthood, but have you ever wondered how our brains manage to keep them intact for so long? A groundbreaking new study published in the journal Science Advances may have just cracked the code on long-term memory storage, and it all comes down to a microscopic “glue” in our brains.
The Secret Ingredient: KIBRA
An international team of researchers has discovered that a molecule called KIBRA (short for “kidney and brain expressed protein”) plays a crucial role in forming and maintaining long-term memories. Think of KIBRA as a special kind of glue that helps stick together other important memory-forming molecules in our brains.
“Previous efforts to understand how molecules store long-term memory focused on the individual actions of single molecules,” explains André Fenton, a professor of neural science at New York University and one of the study’s principal investigators, in a media release. “Our study shows how they work together to ensure perpetual memory storage.”
This discovery is more than just a cool science fact – it could have far-reaching implications for understanding and treating memory-related conditions.
“A firmer understanding of how we keep our memories will help guide efforts to illuminate and address memory-related afflictions in the future,” adds Todd Sacktor, a professor at SUNY Downstate Health Sciences University and one of the study’s principal investigators.
To understand why this discovery is so important, let’s take a quick crash course in how our brains store memories. Our brains are made up of billions of neurons (nerve cells) that communicate with each other through connections called synapses. When we form a memory, certain synapses become stronger while others remain weak. This pattern of strong and weak connections forms a kind of “neural network” that represents the memory.
The problem is that the molecules in our synapses are constantly moving around, wearing out, and being replaced – kind of like how our bodies are always making new skin cells to replace old ones. So, how do our memories stay stable for years or even decades when the very building blocks are constantly changing? That’s where KIBRA comes in.
Methodology
The research team, led by Fenton and Sacktor, conducted their study using laboratory mice. They focused on how KIBRA interacts with another crucial memory molecule called PKMzeta (protein kinase Mzeta). PKMzeta is super important for strengthening synapses in mammals, but it tends to break down after a few days.
Astronauts aboard the International Space Station (ISS) were forced to take cover as a defunct Russian satellite broke up into more than 100 pieces of debris. This fragmentation has significantly added to the growing space debris problem, posing a risk to space operations.
Nasa confirmed that the debris from the Russian satellite breakup passed close enough to the ISS, triggering precautionary measures.
The astronauts were directed to shelter in place as a safety measure. “The crew followed standard procedures and moved to their designated safe areas,” stated a Nasa spokesperson.
The breakup of the RESURS-P1 satellite, an Earth observation satellite declared dead by Russia in 2022, occurred at around 10 am Mountain Time (1600 GMT) on Wednesday. The incident took place in low-Earth orbit near the ISS, prompting US astronauts on board to shelter in their spacecraft for about an hour. This event has reignited discussions about the need for international regulations and collaborative efforts to manage space debris.
What happened to the defunct Russian satellite?
The defunct Russian satellite RESURS-P1 broke up into over 100 pieces of debris in low-Earth orbit. US Space Command confirmed that “over 100 pieces of trackable debris” were created immediately following the breakup.
How did this affect the astronauts aboard the ISS?
The debris field from the breakup passed close enough to the ISS to necessitate precautionary measures. Nasa’s Space Station office reported that the astronauts were directed to shelter in their designated safe areas. They remained in their spacecraft for roughly an hour before resuming their normal activities.
What is the risk associated with this debris?
The debris field created by the disintegrated satellite is expected to remain in orbit for several years, increasing the likelihood of collisions with operational satellites and space stations. “It’s a wild west out there,” commented a leading astrophysicist. “The increased debris heightens the risk of disastrous collisions, presenting a real environmental problem.”
How are space agencies responding to this incident?
Nasa and other space agencies are using space-tracking radars to monitor the debris. US Space Command and firms like LeoLabs have detected at least 180 pieces of debris. There are ongoing discussions about the need for international regulations to manage space traffic and mitigate the risks associated with space debris.
What caused the satellite to break up?
There are no immediate details on what caused the break-up of the RESURS-P1 satellite. Analysts speculate it could have been due to an onboard issue, such as leftover fuel causing an explosion. Jonathan McDowell, a space-tracker and Harvard astronomer, remarked, “I find it hard to believe they would use such a big satellite as an ASAT target. But, with the Russians these days, who knows.”
GOOGLE has announced the release of an artificial intelligence tool that increases productivity for users with jam-packed inboxes.
The company announced a new Gemini side panel in the web version of Gmail on Monday, promising “new, powerful ways to get more done in your personal and professional life.”
The multi-purpose tool is designed to accommodate models including Gemini 1.5 Pro, which boasts a longer context window and more advanced reasoning.
The AI can assist in drafting messages, generate suggested responses to email chains, and create a summary of emails in a thread.
It can also function as a powerful search tool that reviews the contents of your inbox in seconds.
Users can ask it to show unread emails and find information contained within the messages they receive.
Users will also be able to use Gemini in the Gmail mobile app on Android and iOS to review email threads and see a summarized view of just the highlights.
“This is useful when you’re on the go, especially because reading through long email threads can be time consuming and even a bit of a challenge on a smaller screen,” the company professed in a blog post.
However, there’s a catch. Free Gmail users won’t get their hands on the tool anytime soon – and it isn’t available for all paying users, either.
Users will need a subscription to Google One AI Premium, which is more than double the price of Google One Premium.
Gemini for Workspace is available through two plans: a Gemini Enterprise add-on, which replaces Duet AI for Workspace Enterprise, and a new Gemini Business add-on, which carries a lower price tag.
And while rollout began yesterday, it may take some time before the feature appears in your inbox.
Users who are part of the Google Workspace Rapid Release program should expect to see the feature within three days of the launch.
Rollout begins July 8 for those on the standard Scheduled Release and can take up to 15 days.
As for the mobile app, the tool will become available up to two weeks after the June 24 release across the board.
A Gemini 1.5 Pro side panel is also coming to other services like Google Drive, Docs, Sheets, and Slides.
The company teased two other features that are coming to Workspace Labs in July.
Contextual Smart Reply will offer longer, more detailed suggestions for email messages and provide one-tap options on mobile.
Gmail Q&A will allow users to ask Gemini “more complex questions,” including requests to compare information contained in different emails. This feature will be available on both mobile and web.
The company confirmed that a ChatGPT app for Windows is in the pipeline and will be released later this year.
Mac users can now rejoice! OpenAI has released its ChatGPT app for MacOS as a free download, ending the ChatGPT Plus exclusivity that was in place since last month.
The desktop app, compatible with macOS 14+ devices running on Apple Silicon (M1 or better), brings the power of ChatGPT to your fingertips. Users can conveniently access the AI assistant from any screen using the shortcut Option + Space keys.
Visually, the app mirrors the familiar ChatGPT web interface, offering a seamless transition for existing users. The app retains ChatGPT’s multi-modal capabilities, supporting text and voice input alongside image and file uploads for a comprehensive user experience.
Could aliens be changing their planet’s climate in the same way we are? If so, scientists believe it could make finding intelligent life much easier than we thought. A groundbreaking new study suggests that greenhouse gases could be a telltale sign that aliens are hard at work changing their world’s climate — for better or worse.
Unlike climate change here on Earth, researchers from the University of California-Riverside say the presence of certain gases could signal that aliens are engaging in a process known as “terraforming.” This process modifies a planet to make it more habitable for life.
While we’ve only dreamed about doing this to Mars, what if an advanced alien civilization was already tweaking the climate of a world in their home star system? How would we know? The findings in The Astrophysical Journal zero in on artificial greenhouse gases that are making those exoplanets warmer.
According to UC Riverside astrobiologist Edward Schwieterman and his team, these gases would be a dead giveaway of planetary engineering. These aren’t your run-of-the-mill greenhouse gases like carbon dioxide. We’re talking about super-powered, lab-created chemicals that could rapidly warm up a freezing world. Think of it as a giant global warming bomb that turns cold, uninhabitable planets like Mars into a warm, livable world like Earth within years instead of centuries.
“For us, these gases are bad because we don’t want to increase warming. But they’d be good for a civilization that perhaps wanted to forestall an impending ice age or terraform an otherwise-uninhabitable planet in their system, as humans have proposed for Mars,” says Schwieterman in a university release.
5 Gases Astronomers Are Looking For
The researchers identified five specific gases that could serve as smoking guns for alien terraforming:
Fluorinated versions of methane
Fluorinated versions of ethane
Fluorinated versions of propane
Gases made of nitrogen and fluorine
Gases made of sulfur and fluorine
If you’re thinking these sound like something cooked up in a chemistry lab, you’re not wrong. Here on Earth, these gases play a role in the production of computer chips. These gases don’t occur naturally in significant amounts, which is precisely why finding them on another world would be so exciting. Scientists call potential signs of alien technology “technosignatures,” and these gases would fit the bill perfectly.
What Makes These Gases So Special For Terraforming?
For starters, they’re incredibly potent greenhouse gases. Take sulfur hexafluoride, for example. This powerhouse chemical has 23,500 times the warming effect of carbon dioxide. That means a little goes a long way when you’re trying to heat up a planet.
Another huge advantage is their longevity. These gases can stick around in an Earth-like atmosphere for up to 50,000 years without breaking down.
“They wouldn’t need to be replenished too often for a hospitable climate to be maintained,” Schwieterman explains.
Now, you might be wondering – haven’t we heard about other artificial gases, like chlorofluorocarbons (CFCs), being potential alien technosignatures? While that’s true, the gases in this new study have some distinct advantages. CFCs are known to destroy the ozone layer, which could be a problem for aliens just as it is for us.
“If another civilization had an oxygen-rich atmosphere, they’d also have an ozone layer they’d want to protect,” Schwieterman notes. “CFCs would be broken apart in the ozone layer even as they catalyzed its destruction.”
The fluorinated gases proposed in this study, on the other hand, are chemically inert. Scientists don’t believe they would react with or damage an ozone layer, making them a much safer choice for would-be planet engineers.
“With 70% of the planet covered by water, the seas are important drivers of Earth’s global climate,” NASA wrote as part of its post on greenhouse gases.
“Our ocean is changing,” the National Aeronautics and Space Administration (NASA) wrote while posting a visualisation depicting how greenhouse gases impact Earth’s water bodies. The space agency wrote that the gases produced by human activities are altering the ocean.
Elaborating on the visualisation, NASA shared that the different colours depict the average temperature for the sea surface currents. “With warmer colours (red, orange, and yellow) representing warmer temperatures and cooler colours (green and blue) representing cooler temperatures,” the agency added.
“With 70% of the planet covered by water, the seas are important drivers of Earth’s global climate. Yet, increasing greenhouse gases from human activities are altering the ocean before our eyes. NASA and its partners are on a mission to find out more,” NASA further posted.
Since being shared, the video has received more than 8.2 lakh views. In addition, the video has accumulated nearly 8,000 likes. People posted varied comments while reacting to the share.
UPSC has decided to introduce facial recognition and AI-based CCTV surveillance to strengthen the examination process and eliminate the possibility of malpractice by candidates. This assumes significance as the Central government is facing heat over alleged irregularities in the UGC NET and NEET exams.
Amid the NEET and NET fiasco, the country’s premier recruitment body UPSC has decided to introduce a facial recognition and Artificial Intelligence-based CCTV surveillance system to prevent cheating. The Union Public Service Commission (UPSC) has recently floated a tender inviting bids from experienced public sector undertakings to devise two tech solutions – “Aadhaar based fingerprint authentication (else digital fingerprint capturing) & facial recognition of candidates and QR code scanning of e-admit cards” and “Live AI-based CCTV surveillance service” – to be used during the examination process.
UPSC conducts 14 major exams, including the Civil Services exam, which selects officers for Indian Administrative Service (IAS), Indian Foreign Service (IFS), and Indian Police Service (IPS) posts. UPSC also conducts exams to fill Group A and Group B posts of the central government. Over 26 lakh candidates are expected to appear in these recruitments, conducted at a maximum of 80 centres in Leh, Kargil, Srinagar, Imphal, Agartala, Aizawl and Gangtok, among other major cities.
“The UPSC attaches great importance to the conduct of its examinations in a free, fair and impartial manner. In its endeavor to fulfill these objectives, the Commission intends to make use of the latest digital technology to match and cross-check the biometric details of the candidates and to monitor various activities of the candidates during the examination to prevent cheating, fraud, unfair means and impersonation,” read the tender document dated June 3, 2024.
The document further reads, “The Commission has desired to incorporate Aadhaar-based fingerprint authentication (else digital fingerprint capturing) and facial recognition of candidates, scanning of QR Code of e-admit cards and monitoring through live AI-based CCTV video surveillance.”
Intermittent fasting has been a popular weight loss strategy in recent years, but it has also been linked to several concerning health risks. While some concerns have been backed up by science, others have been more rumor than fact. Now, a new study is debunking several of these common myths about fasting to lose weight.
Researchers from the University of Illinois Chicago targeted four common warnings people who try intermittent fasting often encounter. Those include fasting leading to poor dieting habits, eating disorders, decreasing muscle mass, and diminished sex drive. By analyzing previous studies on the effects of intermittent fasting, the review in Nature Reviews Endocrinology was able to disprove all four potential pitfalls of fasting for health purposes.
“I’ve been studying intermittent fasting for 20 years, and I’m constantly asked if the diets are safe,” says lead author Krista Varady, a professor of kinesiology and nutrition at UIC, in a media release. “There is a lot of misinformation out there. However, those ideas are not based on science; they’re just based on personal opinion.”
There Are 2 Main Types of Intermittent Fasting
Researchers say that their findings apply to both of the popular fasting strategies dieters use to shed excess weight — alternate-day eating and time-restricted eating.
Alternate-day eating involves eating only a small number of calories one day and then eating whatever you want the next.
Time-restricted eating allows dieters to only eat within a four to 10-hour window each day.
Key Results: What Did Scientists Debunk?
Intermittent fasting does not lead to poor dieting habits:
The study finds that sugar, saturated fat, cholesterol, fiber, sodium, and caffeine intake do not change during periods of fasting in comparison to the time before a fast. Researchers also discovered that the percentage of energy consumed in the form of carbohydrates, protein, and fat does not change either.
Intermittent fasting does not cause eating disorders:
None of the studies analyzed by the UIC team showed a link between people engaging in intermittent fasting and those dieters later developing an eating disorder. However, the study authors note that all of the studies they reviewed screened out anyone who previously had an eating disorder.
The researchers also urge people who have had a history of eating disorders to avoid intermittent fasting.
Intermittent fasting does not cause dramatic muscle mass loss:
The analyzed studies found that people lose the same amount of lean muscle mass by fasting as they do by engaging in another type of diet. In both cases, the team recommends resistance training and consuming more protein to prevent losing muscle.
US Space Agency NASA regularly shares stunning images from our universe, leaving the space lovers mesmerised. NASA’s social media handle is a treasure trove for those who love watching educational videos and fascinating images showcasing Earth and space. Now, in its recent post, the space agency delighted its Instagram followers with a picture of a “space potato”. Yes, you read that right! The image, captured by the most powerful camera ever sent to another planet, is of Phobos, the largest of Mars’ two raggedy moons.
Sharing the picture, NASA wrote, “Phobos is the larger of Mars’ two moons — but it’s still only about 17 x 14 x 11 miles (27 by 22 by 18 kilometres) in diameter. Because Phobos is so small, its gravity isn’t strong enough to pull it into a sphere (like Earth’s Moon), giving it its lumpy shape.”
“Phobos is also on a collision course with Mars — though it’ll take a while to get there. It’s nearing the Red Planet at a rate of six feet (1.8 meters) every hundred years. At that rate, the moon will either crash into Mars in 50 million years or break up into a ring,” the space agency added.
The space agency said the image was taken by the HiRISE camera aboard its Mars Reconnaissance Orbiter spacecraft. “The Martian moon Phobos stands against the darkness of space. The moon is brownish-red and lumpy, pocketed with a number of craters of all sizes. A white patch is visible next to Stickney crater, a particularly large crater on its right side,” NASA wrote in the image description.
The space agency shared the image just a few days back. Since then, it has accumulated more than 415,000 likes and several comments. Internet users were amazed to see the picture.
“It looks like it’s made of some kind of metal,” wrote one user. “Finally, we find a potato in space,” commented another.
“Beautiful potato in the space,” said one user. “Can we turn it into space fries,” sarcastically wrote another.
Meta first announced Meta AI at last year’s Connect, and since April, it has been bringing the latest version of Meta AI built with Llama 3 to users across the world.
“With our most powerful Large Language Model (LLM) under the hood, Meta AI is better than ever. We’re excited to share our next-generation assistant with even more people and can’t wait to see how it enhances people’s lives,” it said.
From asking Meta AI in WhatsApp group chat for recommendations on restaurants, to seeking ideas on places to stop on a road trip, or even asking Meta AI on the web to create a multiple choice test, Meta AI works in multiple ways for users.
“Moving into your first apartment? Ask Meta AI to ‘imagine’ the aesthetic you want so that you can create a mood board of AI-generated images for inspiration on your furniture shopping,” the Meta announcement said citing examples.
Users can also access Meta AI when scrolling through Facebook feeds. “Come across a post you’re interested in? You can ask Meta AI for more info right from the post. So if you see a photo of the northern lights in Iceland, you can ask Meta AI what time of year is best to check out the aurora borealis,” it said.
The return trip to Earth for two NASA astronauts who rode to orbit on the trouble-plagued company’s Starliner has been delayed for a third time as of Saturday — with Butch Wilmore and Suni Williams cooling their heels at the International Space Station (ISS) while engineers on the ground race against time to fix numerous issues with the spacecraft.
They have a reported 45-day window to bring them back, according to officials.
The return module of the Starliner spacecraft is docked to the ISS’s Harmony module, but the capsule has limited fuel, leaving the window for a safe return flight increasingly narrow, officials said.
Wilmore and Williams were supposed to come home June 13 after a week on the ISS.
But because of problems that include five helium leaks on the Starliner, they’re still up there.
The issues with the Starliner included five thrusters that abruptly stopped working during flight and a series of helium leaks, CNN reported.
Posters on X went to town on Boeing, calling on Elon Musk to rescue the astronauts with one of his SpaceX Dragon spacecraft.
“How terribly dangerous is Boeing’s Starliner? May need Space X to rescue its astronauts from ISS,” wrote someone with the X handle @NONbiasedly.
“Boeing Starliner literally falling apart in space right now,” wrote Captain Coronado.
“Deathtrap nearly killed the two astronauts during takeoff and trip to the ISS. Mismanagement at Boeing proving extremely dangerous!!”
Others felt the situation was not as serious as it seemed.
Space expert Jonathan McDowell told The Post the situation may not be as perilous as it appears.
“You can lose a few thrusters and still be OK because there are many of them but still this is the propulsion system and you want to understand everything that’s going on,” he said.
“They want to be sure these smaller issues aren’t masking bigger ones.”
McDowell said that in a worst-case scenario, the astronauts would have to wait until Musk’s Dragon spacecraft makes its scheduled trip to the ISS in August.
After years of delays, including one launch halted at the last minute, Boeing’s Starliner capsule finally blasted off on its first crewed flight from Florida’s Cape Canaveral Space Force Station on June 5.
Stargazers and skywatchers have been treated to a stunning show of celestial events already in 2024: the total solar eclipse, the return of the ‘devil comet,’ and multiple nights colored by the northern lights have undoubtedly topped the list for some.
But if that wasn’t enough for you, space experts say we’re due for another stellar sighting: a rare nova explosion that’ll bring a “new star” to the night sky.
Earlier this year, NASA reported a star system, some 3,000 light years away, is expected to erupt.
“It’s a once-in-a-lifetime event that will create a lot of new astronomers out there, giving young people a cosmic event they can observe for themselves, ask their own questions, and collect their own data,” Dr. Rebekah Hounsell, an assistant research scientist specializing in nova events at NASA’s Goddard Space Flight Center, said in a statement. “It’ll fuel the next generation of scientists.”
Here’s what you need to know.
The ‘rare nova explosion’ of T CrB
Roughly every 79 years, an explosive event occurs in a binary star system roughly 3,000 light-years from Earth, in the constellation known as the Northern Crown. That system contains the recurrent nova T Coronae Borealis, otherwise known as the Blaze Star or T CrB.
T CrB is one of 10 recurring novae scientists have found in the galaxy, Bill Cooke, NASA’s Meteoroid Environments Office Lead at the Marshall Space Flight Center in Huntsville, Alabama, previously told Nexstar.
These novae, the plural of nova, consist “of a normal or red giant star and a white dwarf about the size of the Earth,” Cooke explained. “The larger star is dumping material onto the surface of its white dwarf companion; as material accumulates, the temperature keeps rising until a thermonuclear runaway is initiated.”
That will then cause T CrB to erupt, or “go nova.”
What happens when T CrB explodes?
Unlike a supernova — which is a “final, titanic explosion” — the white dwarf of T CrB will remain intact during this nova event, Dr. Hounsell explained. Instead, it ejects the material that has accumulated on its surface, hurling it into space.
It will all lead to a flash bright enough to be seen from Earth, even with the naked eye. The last time we had such a chance was in 1946.
What will T CrB look like from Earth, and how can I see it?
When it reaches nova status, T CrB will appear like a “new star” in the constellation of Corona Borealis, or The Northern Crown.
To find T CrB, you’ll want to look between the constellations of Hercules and Boötes, located toward the north. More specifically, according to NASA, it’ll be roughly in line with Vega and Arcturus.
According to Cooke, T CrB will be as bright as the North Star — but only for about a week. Unless you’re in Antarctica, you should be able to get a glimpse of it then.
Unlike the solar eclipse, scientists don’t know when exactly the rare nova event will occur.
Over the last decade, T CrB’s behavior has been “strikingly similar” to its behavior in the years leading up to its 1946 eruption, NASA said earlier this month. That has led some researchers to predict that the explosion will occur by September, though others warn it could take longer.
Even in your 80s, it’s not too late to adopt healthy habits that could help you live to 100 and beyond. That’s the encouraging message from a groundbreaking new international study, which found that older adults who maintained healthier lifestyles had significantly better odds of becoming centenarians.
The research, published in JAMA Network Open, suggests that simple lifestyle choices like not smoking, exercising regularly, and eating a diverse diet can boost your odds of reaching the century mark – even if you make all of these changes in your later years. It’s a hopeful finding that challenges assumptions about the impact of lifestyle changes for the very old.
“Adhering to a healthy lifestyle appears to be important even at late ages, suggesting that constructing strategic plans to improve lifestyle behaviors among all older adults may play a key role in promoting healthy aging and longevity,” the study authors write in their report.
Methodology
To investigate the link between lifestyle and extreme longevity, researchers tapped into data from the Chinese Longitudinal Healthy Longevity Survey, one of the largest studies of very old people in the world. They identified 1,454 individuals who lived to at least 100 years old, then matched them with 3,768 control participants of similar age and background who died before reaching 100.
The team constructed a “healthy lifestyle score” (HLS) based on three key factors:
Smoking status (never, former, or current smoker)
Exercise habits (current, former, or never exerciser)
Dietary diversity (based on regular consumption of fruits, vegetables, fish, beans, and tea)
Participants received 0-2 points for each factor, for a total possible score of 0-6. The researchers then compared the lifestyle scores of centenarians to those who didn’t reach 100 to see if healthier habits were associated with greater longevity.
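As a rough illustration, the three-factor scoring scheme described above could be sketched as follows. The exact point assignments within each category are an assumption for illustration; the study’s precise coding is not reproduced here.

```python
# Hypothetical sketch of the healthy lifestyle score (HLS): three factors,
# each worth 0-2 points, for a total possible score of 0-6.
# Point assignments per category are assumed, not taken from the paper.

FACTOR_POINTS = {
    "smoking":  {"never": 2, "former": 1, "current": 0},
    "exercise": {"current": 2, "former": 1, "never": 0},
}

DIVERSE_FOODS = {"fruits", "vegetables", "fish", "beans", "tea"}

def diet_diversity_points(foods_eaten_regularly):
    """0-2 points based on regular intake of fruits, vegetables, fish, beans, tea."""
    n = len(set(foods_eaten_regularly) & DIVERSE_FOODS)
    if n >= 4:
        return 2
    if n >= 2:
        return 1
    return 0

def healthy_lifestyle_score(smoking, exercise, foods):
    """Sum the three factor scores into a 0-6 HLS."""
    return (FACTOR_POINTS["smoking"][smoking]
            + FACTOR_POINTS["exercise"][exercise]
            + diet_diversity_points(foods))
```

Under this sketch, a never-smoker who exercises and eats four of the five food groups scores the maximum of 6, while a current smoker who never exercises and eats none of them scores 0.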
Results
Making these healthy choices produced striking results. Those with the highest lifestyle scores (5-6 points) had 61% higher odds of becoming centenarians compared to those with the lowest scores (0-2 points). This association held up even after accounting for factors like education level, marital status, and pre-existing health conditions.
Breaking it down by individual factors:
Never smoking increased the odds of reaching 100 by 25% compared to current smoking
Current regular exercise boosted odds by 31% compared to never exercising
A highly diverse diet improved odds by 23% compared to the least diverse diets
Interestingly, the researchers found that body mass index and alcohol consumption weren’t significantly linked to reaching centenarian status in this population.
In its quest to unravel the mysteries of Mars, NASA’s Perseverance rover has stumbled upon an intriguing find: a peculiar, light-colored boulder that stands out among the darker rocks surrounding it. The speckled rock, affectionately nicknamed “Atoko Point” by the mission’s science team, measures 18 inches wide and 14 inches tall and has piqued the interest of researchers eager to learn more about the Red Planet’s geological history.
The discovery came after the rover, which landed on Mars in February 2021, took a detour through a dune field in the ancient river channel Neretva Vallis to avoid boulders that could damage its wheels. The rocky terrain had slowed Perseverance’s progress to just tens of meters per Martian day, or sol, a far cry from the average of over a hundred meters per sol it had previously achieved.
Determined to find a way through, the rover’s route planning team spotted an opportunity in the quarter-mile dune field.
“We had been eyeing the river channel just to the north as we went, hoping to find a section where the dunes were small and far enough apart for a rover to pass between — because dunes have been known to eat Mars rovers,” says Evan Graser, Perseverance’s deputy strategic route planner lead at NASA’s Jet Propulsion Laboratory in Southern California, in a media release.
With the help of its auto-navigation system, Perseverance successfully traversed the dune field, covering 656 feet in a single sol to reach its first science stop, a hill called “Mount Washburn.” It was here that the rover’s instruments, SuperCam and Mastcam-Z, revealed Atoko Point’s unique composition: a combination of the minerals pyroxene and feldspar.
“The diversity of textures and compositions at Mount Washburn was an exciting discovery for the team, as these rocks represent a grab bag of geologic gifts brought down from the crater rim and potentially beyond,” says Brad Garczynski of Western Washington University in Bellingham, the co-lead of the current science campaign. “But among all these different rocks, there was one that really caught our attention.”
Atoko Point’s size, shape, and mineral composition set it apart from the other rocks in the area, leading scientists to speculate about its origins. Some believe that the minerals were produced in a subsurface magma body that is now exposed on the crater rim, while others suggest that the boulder may have been transported from far beyond Jezero Crater’s walls by powerful Martian waters in the distant past.
After studying Atoko Point, Perseverance continued its journey, traveling 433 feet north to investigate the geology of “Tuff Cliff” before embarking on a four-sol, 1,985-foot trip to its current location, an area nicknamed “Bright Angel.” The rover is now analyzing a rocky outcrop to determine whether a rock core sample should be collected for eventual return to Earth.
Perseverance’s mission is focused on astrobiology, including the collection of samples that may contain evidence of ancient microbial life. By characterizing Mars’ geology and past climate, the rover is helping to pave the way for human exploration of the Red Planet. This mission is part of NASA’s larger Moon to Mars exploration approach, which includes Artemis missions to the Moon designed to prepare for future human exploration of Mars.
The discovery of Atoko Point is just one example of the surprising and diverse geological features waiting to be uncovered on Mars. As Perseverance continues its exploration of Jezero Crater, scientists eagerly await the next groundbreaking find that could help unlock the secrets of the Red Planet’s past and shape our understanding of its potential to support life. Perseverance’s journey is not without its challenges, as demonstrated by the need to navigate through the boulder-strewn terrain of Neretva Vallis. However, the rover’s advanced technology and the ingenuity of its team have allowed it to overcome these obstacles and continue its vital scientific work.
SoftBank Group (9984.T) CEO Masayoshi Son said on Friday that the group’s mission was to help in humanity’s progress by realising artificial super intelligence, which he said would exceed human capabilities by a factor of 10,000.
“SoftBank Group has done many things until now that have all been a warm up for my great dream to realise artificial super intelligence,” Son told shareholders at the group’s annual general meeting.
Son often hails the transformative power of new technologies and has made his name and fortune by betting on the proliferation of the internet and smartphones.
At Friday’s meeting, he said the group was now putting all its efforts into pairing robotics with artificial intelligence to be used in all kinds of mass production and logistics, as well as autonomous driving.
Son’s vision for AI robots would require “immense capital” and pooling funds with partners, he said, as SoftBank would not be able to bankroll it alone.
Son’s reputation as a visionary investor was dented after many of the tech startups held by the group’s Vision Fund investment vehicles went sour from 2021 onward. Some of his other predictions, such as the widespread adoption of “internet of things” technology, have not materialised.
But the success of SoftBank subsidiary Arm, the British chip designer, since its public listing in September last year has burnished Son’s reputation, as investors have piled into firms linked to AI.
As Arm’s share price has surged, the discount between the value of SoftBank’s assets and its market capitalisation has grown wider.
Earlier in June a source told Reuters that activist investor Elliott Management had built a stake worth over $2 billion in SoftBank and called for a $15 billion share buyback to boost its share price.
Ilya Sutskever, OpenAI’s co-founder and former chief scientist, is starting a new AI company focused on safety. In a post on Wednesday, Sutskever revealed Safe Superintelligence Inc. (SSI), a startup with “one goal and one product”: creating a safe and powerful AI system.
The announcement describes SSI as a startup that “approaches safety and capabilities in tandem,” letting the company quickly advance its AI system while still prioritizing safety. It also calls out the external pressure AI teams at companies like OpenAI, Google, and Microsoft often face, saying the company’s “singular focus” allows it to avoid “distraction by management overhead or product cycles.”
“Our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the announcement reads. “This way, we can scale in peace.” In addition to Sutskever, SSI is co-founded by Daniel Gross, a former AI lead at Apple, and Daniel Levy, who previously worked as a member of technical staff at OpenAI.
Last year, Sutskever led the push to oust OpenAI CEO Sam Altman. Sutskever left OpenAI in May and hinted at the start of a new project. Shortly after Sutskever’s departure, AI researcher Jan Leike announced his resignation from OpenAI, citing safety processes that have “taken a backseat to shiny products.” Gretchen Krueger, a policy researcher at OpenAI, also mentioned safety concerns when announcing her departure.
The ozone layer in the Earth’s atmosphere absorbs harmful ultraviolet radiation from the Sun, which can cause skin cancer on exposure and even disrupt crop yields and food production.
Internet satellite networks like Elon Musk’s Starlink could be depleting the Earth’s ozone layer, researchers from the University of Southern California have claimed. The research, published in the journal Geophysical Research Letters, claims that SpaceX’s Starlink satellites spew copious amounts of aluminium oxide into the atmosphere, which could deplete the ozone layer.
Notably, the ozone layer is vital for our survival. It absorbs harmful ultraviolet radiation from the Sun, which can cause skin cancer on exposure and even disrupt crop yields and food production.
“Only in recent years have people started to think this might become a problem. We were one of the first teams to look at what the implication of these facts might be,” said co-author and University of Southern California astronautics researcher Joseph Wang in a statement.
Internet satellites in low Earth orbit have a short lifespan of about five years. There are currently more than 8,000 internet satellites in low Earth orbit, of which about 6,000 are Starlink’s. These satellites are designed to burn up in the atmosphere when their service lives end, researchers said. As a result, they could spew over 1,000 tons of aluminium oxide annually, a 646 per cent increase relative to natural levels. Aluminium oxides deplete ozone by activating chlorine, which then reacts destructively with ozone.
“Satellites burn up at the end of service life during reentry, generating aluminium oxides as the main byproduct. These are known catalysts for chlorine activation that depletes ozone in the stratosphere,” the researchers wrote.
“We find that the demise of a typical 250-kg satellite can generate around 30 kg of aluminium oxide nanoparticles, which may endure for decades in the atmosphere,” they added.
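As a rough sanity check on the per-satellite figure quoted above, here is a back-of-the-envelope sketch. The fleet size (~6,000 Starlink satellites today) and five-year replacement cycle come from the article; note that the study’s 1,000-ton annual projection assumes the far larger constellations planned for the future, not just today’s fleet.

```python
# Back-of-the-envelope estimate of annual aluminium oxide from reentries.
# Assumptions: ~6,000 satellites in today's fleet, ~5-year replacement
# cycle, ~30 kg of aluminium oxide per reentry (the study's figure for
# a typical 250-kg satellite).

ALUMINA_PER_REENTRY_KG = 30
fleet_size = 6_000
lifespan_years = 5

# A steady-state fleet of N satellites with an L-year lifespan implies
# roughly N / L reentries per year.
reentries_per_year = fleet_size / lifespan_years            # 1,200 reentries
alumina_tonnes_per_year = reentries_per_year * ALUMINA_PER_REENTRY_KG / 1_000

print(alumina_tonnes_per_year)  # 36.0 tonnes/year for today's fleet alone
```

Today’s fleet alone would thus contribute on the order of tens of tonnes per year; reaching the study’s 1,000-ton figure requires the tens of thousands of satellites in planned mega-constellations.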
The study further found that the presence of the oxides increased roughly eightfold between 2016 and 2022 and could surge far more with current satellite launch plans.
Microsoft co-founder Bill Gates recently joined Zerodha founder Nikhil Kamath for the debut episode of Kamath’s podcast series “People by WTF.” During their 30-minute conversation, Gates discussed a range of topics from his early days at Microsoft to the transformative impact of artificial intelligence (AI) on various industries, particularly software engineering.
Gates expressed optimism about AI’s potential, particularly in education and productivity. He highlighted successful AI projects in India and the US, which demonstrate AI’s capability to serve as educational tutors and enhance job productivity. “The amazing thing about this technology is that we know it can help in key areas,” Gates noted, emphasizing AI’s positive potential if used to make jobs more productive.
Addressing concerns about AI displacing software engineers, Gates reassured listeners by dismissing such worries as “alarmist.” He stressed the continued need for software engineers despite AI advancements. “We still need those software engineers as we are not going to stop needing them,” he stated, providing comfort to those in the industry anxious about job security.
However, Gates did acknowledge a distant possibility where AI might replace all jobs but deemed it unlikely within the next two decades. He added humorously, “Although I am not sure of that,” suggesting the long-term impact of AI remains uncertain.
NASA has set its sights on the Moon, aiming to send astronauts back to the lunar surface by 2026 and establish a long-term presence there by the 2030s. But the Moon isn’t exactly a habitable place for people.
Cosmic rays from distant stars and galaxies and solar energetic particles from the Sun bombard the surface, and exposure to these particles can pose a risk to human health.
Both galactic cosmic rays and solar energetic particles are high-energy particles that travel close to the speed of light.
While galactic cosmic radiation trickles toward the Moon in a relatively steady stream, energetic particles can come from the Sun in big bursts. These particles can penetrate human flesh and increase the risk of cancer.
Earth has a magnetic field that provides a shield against high-energy particles from space. But the Moon doesn’t have a magnetic field, leaving its surface vulnerable to bombardment by these particles.
During a large solar energetic particle event, the radiation dosage an astronaut receives inside a space suit could exceed 1,000 times the dosage someone on Earth receives. That would exceed an astronaut’s recommended lifetime limit by 10 times.
NASA’s Artemis program, which began in 2017, intends to reestablish a human presence on the Moon for the first time since 1972. My colleagues and I at the University of Michigan’s CLEAR center, the Center for All-Clear SEP Forecast, are working on predicting these particle ejections from the Sun. Forecasting these events may help protect future Artemis crew members.
An 11-year solar cycle
The Moon is facing dangerous levels of radiation in 2024, since the Sun is approaching the maximum point in its 11-year solar cycle. This cycle is driven by the Sun’s magnetic field, whose total strength changes dramatically every 11 years. When the Sun approaches its maximum activity, as many as 20 large solar energetic particle events can happen each year.
Both solar flares, which are sudden eruptions of electromagnetic radiation from the Sun, and coronal mass ejections, which are expulsions of a large amount of matter and magnetic fields from the Sun, can produce energetic particles.
The Sun is expected to reach its solar maximum in 2026, the target launch time for the Artemis III mission, which will land an astronaut crew on the Moon’s surface.
While researchers can follow the Sun’s cycle and predict trends, it’s difficult to guess when exactly each solar energetic particle event will occur and how intense each event will be. Future astronauts on the Moon will need a warning system that predicts these events more precisely before they happen.
Forecasting solar events
In 2023, NASA funded a five-year space weather center of excellence called CLEAR, which aims to forecast the probability and intensity of solar energetic particle events.
Right now, forecasters at the National Oceanic and Atmospheric Administration Space Weather Prediction Center, the center that tracks solar events, can’t issue a warning for an incoming solar energetic particle event until they actually detect a solar flare or a coronal mass ejection. They detect these by looking at the Sun’s atmosphere and measuring X-rays that flow from the Sun.
Once a forecaster detects a solar flare or a coronal mass ejection, the high-energy particles usually arrive to Earth in less than an hour. But astronauts on the Moon’s surface would need more time than that to seek shelter. My team at CLEAR wants to predict solar flares and coronal mass ejections before they happen.
Nasa has confirmed audio shared widely on social media of astronauts in distress was a simulation broadcast on its YouTube channel in error.
In the clip, intended to be used for training purposes, a voice said an astronaut on the International Space Station (ISS) had a “tenuous” chance of survival.
The broadcast of the clip on Wednesday evening sparked speculation online about a possible emergency in space – but Nasa said all members of the ISS are safe.
“This audio was inadvertently misrouted from an ongoing simulation where crew members and ground teams train for various scenarios in space and is not related to a real emergency,” it said on the ISS X page. Private firm SpaceX also posted on social media to say there was no emergency aboard the ISS.
The incident, which occurred at 23:28 BST, led some people to believe that a real astronaut was suffering from decompression sickness in space.
It was made all the more believable because, unlike fake audio which usually appears first from spurious sources, this was broadcast on an official Nasa channel.
In the audio being shared on social media, a person asks the ISS crew to help get an astronaut into his spacesuit, to check his pulse, and to provide him with oxygen.
Though Nasa confirmed the audio was shared in error, it did not independently verify the recordings being shared online were the same that it broadcast.
Decompression sickness, also known as “the bends”, is a problem typically associated with scuba diving, in which bubbles form inside the body due to a change in external pressure.
Voters can talk to AI Steve, whose name will be on the ballot for the U.K.’s general election next month, to ask policy questions or raise concerns.
An artificial intelligence candidate is on the ballot for the United Kingdom’s general election next month.
“AI Steve,” represented by Sussex businessman Steve Endacott, will appear on the ballot alongside non-AI candidates running to represent constituents in the Brighton Pavilion area of Brighton and Hove, a city on England’s southern coast.
“AI Steve is the AI co-pilot,” Endacott said in an interview. “I’m the real politician going into Parliament, but I’m controlled by my co-pilot.”
Endacott is the chairman of Neural Voice, a company that creates personalized voice assistants for businesses in the form of an AI avatar. Neural Voice’s technology is behind AI Steve, one of the seven characters the company created to showcase its technology.
He said the idea is to use AI to create a politician who is always around to talk with constituents and who can take their views into consideration.
People can ask AI Steve questions or share their opinions on Endacott’s policies on its website, where a large language model gives answers in voice and text based on a database of information about his party’s policies.
If he doesn’t have a policy for a particular issue raised, the AI will conduct some internet research before engaging the voter and pushing them to suggest a policy.
AI Steve, which is open to the public to try, told NBC News in response to a question about its stance on Brexit: “As a democracy, the UK voted to leave, and it’s my responsibility to implement and optimize this decision regardless of my personal views on the matter.”
“Do you have any thoughts on how Brexit should be managed in the future?” it added.
Endacott said he is also seeking thousands of what he calls “validators”: people he is targeting because he believes they represent the common man, in particular Brighton locals who have a long daily commute.
“We’re asking them once a week to score our policies from 1 to 10. And if a policy gets more than 50%, it gets passed. And that’s the official party policy,” he said, adding, “Every single policy, I will say that my decision is my voters’ decision. And I’m connected to my voters at any time on a weekly basis via electronic means.”
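The pass rule Endacott describes is loose on the details; one plausible reading is that a policy passes when its average 1-to-10 validator score exceeds the midpoint (more than 50%). A minimal sketch under that assumption — the function name and exact formula are hypothetical, not from the party:

```python
def policy_passes(scores):
    """Return True if the mean of 1-10 validator scores exceeds 50%.

    Assumes 'more than 50%' means an average above 5 out of 10;
    the article does not spell out the exact formula.
    """
    return sum(scores) / len(scores) > 5

print(policy_passes([7, 8, 6, 4]))  # True (average 6.25)
print(policy_passes([3, 4, 5, 5]))  # False (average 4.25)
```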
In 2022, Endacott unsuccessfully ran in a local election under the Conservative Party, receiving fewer than 500 votes. This time, the unusual nature of his candidacy stirred some conversation on X over the weekend, when news of AI Steve’s launch leaked online and prompted around 1,000 calls to the AI proxy in one night.
Voters’ top issues so far, according to those calls, were, in order of importance: the safety of Palestinians, trash bins, bicycle lanes, immigration, and abortion. Endacott noted that having an AI representative enables him to respond, in a sense, to thousands of potential constituents a day.
“I don’t have to go knock on their door, get them out of bed when they don’t want to talk to me,” Endacott said. He called that “the old form of politics,” whereas people can now choose to contact AI Steve of their own volition and at their convenience.
Endacott describes himself as a “centralist” who aligns most closely, but not quite, with the Green Party. His own party, Smarter U.K., was not registered in time for this year’s election.
He said he is not using the AI avatar to propel his own business interests, as he says he holds less than a 10% share in Neural River, the platform behind AI Steve. His primary motivation, he said, is to push the government to enact changes to cut carbon emissions — whether that means running for office or, “worst case,” becoming a political influencer.
The vote marks a significant win for Musk, who had been actively campaigning for the pay package’s reinstatement.
In a resounding show of support, Tesla’s legion of small-investor allies rallied behind CEO Elon Musk, securing a decisive victory for his controversial $56 billion pay package on Thursday. This outcome, announced at the company’s shareholder meeting in Austin, Texas, came despite opposition from several major institutional investors.
“We have the most awesome shareholder base,” a triumphant Musk declared to a cheering crowd at the Tesla factory. “Hot damn, I love you guys.”
The vote marks a significant win for Musk, who had been actively campaigning for the pay package’s reinstatement after a Delaware judge voided it in January. The judge had ruled that the original approval process, which lacked sufficient independent oversight, was flawed.
While institutional investors remained divided on the issue, with some echoing concerns over excessive compensation, retail investors emerged as a united front. Many had taken to social media platforms like X (formerly Twitter) in the weeks leading up to the vote, advocating for Musk and his contributions to Tesla’s success.
This unusual display of activism from typically apathetic small investors highlights Musk’s unique ability to cultivate a loyal following. The CEO himself had been actively courting their support through regular social media posts, a dedicated website explaining the proposals, and even factory tours for some voting shareholders.
“About 90% of the retail investors who voted were in favour,” Musk proudly revealed in a post on X over the weekend.
“Both Tesla shareholder resolutions are currently passing by wide margins!” he added.
This overwhelming support, coupled with backing from some large institutional investors, proved crucial in swaying the vote. However, the battle might not be over. Musk still faces legal challenges in Delaware, where the judge previously characterised the Tesla board as “beholden” to the CEO.
Deconstructing Elon Musk’s $56 Billion Tesla Payday
But how did this eye-popping figure come about? The answer lies in an audacious ten-year performance-based plan. Musk was granted stock options that would vest in 12 tranches upon achieving a series of ambitious milestones. These milestones encompassed market capitalisation, earnings, and revenue targets.
The sheer scale of the potential reward was staggering: 303 million stock options, representing approximately 12% of Tesla’s outstanding stock in 2018. Each tranche unlocked options equal to roughly 1% of that outstanding stock, with vesting requiring Musk to hit combinations drawn from 28 challenging targets.
Fast forward to 2023, and Musk’s gamble on himself has paid off handsomely. Tesla’s market capitalisation soared past the $650 billion target in 2020, and the company has consistently exceeded earnings expectations. With the vast majority of the revenue milestones also achieved, Musk has earned almost all of his potential stock options.
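The plan’s structure is easy to sanity-check with back-of-the-envelope arithmetic using the figures quoted above (purely illustrative; this is not the grant’s legal mechanics):

```python
# Back-of-the-envelope check of the 2018 plan's structure,
# using the figures quoted in the article.
total_options = 303_000_000      # options across the full grant
tranches = 12                    # equal performance-based tranches
outstanding_share_pct = 12       # grant was ~12% of 2018 outstanding stock

options_per_tranche = total_options // tranches
pct_per_tranche = outstanding_share_pct / tranches

print(options_per_tranche)  # 25250000 options unlock per tranche
print(pct_per_tranche)      # 1.0 (% of outstanding stock per tranche)
```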
The SC on Tuesday issued a notice to the Centre and NTA over a petition seeking to cancel the results of the NEET-UG 2024, a competitive entrance test for undergraduate medical courses, amid allegations that the paper was leaked.
Physics Wallah chief executive Alakh Pandey has moved the Supreme Court (SC) against random awarding of grace marks to at least 1,500 students in the NEET UG 2024 examination conducted by the National Testing Agency (NTA). The matter is expected to be heard on June 12.
The Noida-based edtech firm’s CEO has accused the agency of failing to explain the formula behind the grace marks. Pandey said his team has collected signatures from about 20,000 students, which he says show that approximately 70 to 80 marks were awarded arbitrarily to at least 1,500 students.
The SC on Tuesday issued a notice to the Centre and NTA over another petition seeking to cancel the results of the NEET-UG 2024, a competitive entrance test for undergraduate medical courses, amid allegations that the paper was leaked.
“The sanctity of the exam has been affected, so we need answers,” a bench of justices Vikram Nath and Ahsanuddin Amanullah said on June 11. The court, however, refused to stay the counselling and adjourned the case to July 8.
“The case that was listed today (June 11) was filed before the declaration of results (June 1). The petition by us raises questions around awarding of grace marks, the procedure followed, disclosure of all details with full transparency, etc. My lawyers are trying to get it listed tomorrow, and we will raise our concerns on the ongoing actions of NTA before the Hon’ble Court,” Pandey told Moneycontrol.
“Our petition is slightly different. We are challenging the arbitrary award of grace marks. The court has indicated that our matter will also be taken up with the other matters, but the court is clear that it will not stay the counselling process at this stage,” Advocate J Sai Deepak, who is representing Physics Wallah, said.
The duo implored the apex court to recognise the urgency of the matter, highlighting the significant distress and uncertainty faced by the students.
Pandey has petitioned the court to order a thorough investigation into the NEET UG 2024 examination by an independent high-powered committee.
They argued that only a detailed inquiry can uncover the complexities and irregularities that have damaged the exam’s credibility. They also requested that the court consider re-conducting the NEET exam if the investigation confirms the alleged discrepancies and procedural lapses.
The case
The petition, filed on June 1, stems from a Bihar police investigation into the alleged paper leak for the test, which sees lakhs of students fight it out over a few thousand seats.
The declaration of results on June 4 added to the trouble: as many as 67 students scored a perfect 720, six of them from the same examination centre, raising suspicions of irregularities.
Apple’s Worldwide Developers Conference keynote has come to a close — and the company had a whole lot to share. We got our first look at the AI features coming to Apple’s devices and some major updates across the company’s operating systems.
If you missed out on watching the keynote live, we’ve gathered all the biggest announcements that you can check out below.
With almost all of big tech getting in on the AI boom, it’s no surprise that Apple is launching an AI system of its own. Apple Intelligence is the company’s new personal intelligence system “that puts powerful generative models right at the core of your iPhone, iPad, and Mac.” That enables a ton of new capabilities across Apple’s native apps, such as the ability to generate images or summarize text.
Apple Intelligence comes with a big emphasis on security, as the system will automatically decide whether it needs to use on-device processing or contact Apple’s private cloud computing server to fulfill your request. The system will be free and available on the iPhone 15 Pro, as well as on iPads and Macs with an M1 chip or later.
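Apple has not published how the system decides between local and cloud processing; purely as an illustration, that routing can be imagined as a capability check like the one below (the complexity score, threshold, and function name are all invented for this sketch):

```python
# Hypothetical sketch of the on-device vs. private-cloud routing the
# article describes. Apple's actual logic is not public; the notion of
# a "complexity score" here is invented purely for illustration.
def route_request(request_complexity, on_device_budget=0.7):
    """Run a request locally when it fits the device's capability
    budget; otherwise send it to the private cloud server."""
    if request_complexity <= on_device_budget:
        return "on-device"
    return "private-cloud-compute"

print(route_request(0.3))  # on-device
print(route_request(0.9))  # private-cloud-compute
```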
Siri gets a big AI boost
Apple’s big push into AI also includes Siri. The upgraded voice assistant will now be integrated more deeply into the iPhone, appearing as a pulsating light on the edge of your device. It will give you more control over your apps, allowing you to ask the voice assistant to find information inside a particular email or even surface a photo of one of your friends. Apple is relying on LLMs to help Siri better understand what you say and keep track of follow-up requests and questions.
Apple is building ChatGPT into Siri
Siri’s big AI upgrade also includes an integration with OpenAI’s ChatGPT chatbot. With the new integration, Siri will automatically determine whether a query would be better suited for ChatGPT. It will then ask for your permission before sending its request to ChatGPT. You’ll be able to use ChatGPT through Siri for free and without an account.
New AI features in Mail, Messages, Photos, and more
Apple is rolling out a bunch of new AI features across its apps in iOS 18, including a way to summarize emails and generate responses. The company showed off its new Genmoji feature, letting you create custom emoji based on a text prompt, along with a new AI image generator called Image Playground.
Apple is bringing AI to the Photos app, too, giving you the ability to search for photos using natural language. You can also clean up objects in the background of your pictures, similar to Google’s Magic Eraser. Additionally, Apple is adding AI-powered transcriptions and summaries to the Notes and Phone apps.
The iPhone gets more customizable in iOS 18
Aside from all the AI, Apple is introducing a new and more customizable Control Center in iOS 18. It’s also launching a way to freely place app icons on your homescreen. The company will also let you lock certain apps with the coming update, preventing other people from using them when you hand them your phone.
Other major changes include a Photos app redesign and a new Game Mode for iPhone that minimizes background activity to optimize gameplay.
AMAZING footage shows the moment the first humanoid robot climbs the Great Wall of China.
The video, released earlier this week, featured the XBot-L exploring the iconic tourist site.
In a chilling display of the advancement of AI and technology, the robot shows remarkable navigation skills and intelligence as it walks over the uneven and broken pavement of the Great Wall.
It easily handled stairs and even performed tai-chi moves – a Chinese martial art – once it reached the wall’s guard towers.
The humanoid robot was designed and built by Robot Era.
Reinforcement learning technology helped the machine accomplish the great feat, the company said.
Yue Xi, co-founder of Robot Era, said: “Algorithms help to strengthen the robot’s perceptive and decision-making capacity in the face of unfamiliar terrains.
“The robot thus can recognize complex road conditions and adjust its walking stance in a timely manner.”
The robot was installed with advanced navigational and balancing systems.
In the video, it has duct tape wrapped around its midsection, which could be a quick fix by engineers to ensure the robot remains stable during the climb.
This is a stark reminder that even the most advanced technology sometimes requires DIY methods to transition from the lab to the real world.
Although Robot Era’s latest success won’t significantly advance robotics, it does promote the company’s humanoid project.
Apple (AAPL.O) unveiled a long-awaited AI strategy on Monday, integrating its new “Apple Intelligence” technology across its suite of apps including Siri and bringing OpenAI’s chatbot ChatGPT to its devices.
In the nearly two-hour long presentation at Apple’s annual developer conference, executives including CEO Tim Cook touted how voice assistant Siri would be able to interact with messages, emails, calendar, as well as third party apps. Siri will be able to write emails and change the tone of voice to suit the occasion.
Long known for a focus on user safety, the iPhone maker also signaled it plans to differentiate itself from rivals Microsoft (MSFT.O) and Google by placing privacy “at the core” of its features.
But Wall Street – looking for more dazzling AI features and reassurance that would put Apple in good standing to compete on AI with market-leader Microsoft – was lukewarm on the event. Apple shares closed down nearly 2%.
Apple’s stock, which trails those of other Big Tech firms this year, had rallied 13% last month in the run-up to the event.
“There isn’t anything here that propels the brand ahead of its as-expected trajectory of incrementalism,” said Dipanjan Chatterjee, analyst at Forrester.
“Apple Intelligence will indeed delight its users in small but meaningful ways, it brings Apple level with, but not head and shoulders above, where its peers are at.”
Apple’s approach contrasts with the enterprise-first focus of its rivals. The company hopes these moves will convince its more than 1 billion users – most of whom are not tech aficionados – of the need for the nascent technology.
Apple executive Craig Federighi called Apple Intelligence “AI for the rest of us.”
Apple remains heavily reliant on sales of the iPhone, and some analysts said any boost from the new AI features was unlikely to materialize in the short term.
“In this early race, it feels that Alphabet, and even more so Microsoft, are in better shape following their initial moves and with thanks to their cloud assets,” said Paolo Pescatore, analyst and founder of PP Foresight.
The AI features announced at Apple’s Worldwide Developers Conference will come with the latest operating system for its devices, which were also demonstrated at the event.
Apple uses the event at its Cupertino, California, headquarters each year to showcase updates to its own apps and operating systems as well as to show developers new tools they will be able to use in their apps.
SIRI REVAMP
The revamped Siri will have more control over apps, tackling what has proven tricky in the past: the assistant needs to understand both the user’s exact intentions and how each app works.
Siri will also tap ChatGPT’s expertise and seek permission from users before querying the OpenAI service as part of Apple’s tie-up with the Microsoft-backed startup, a privacy feature that Apple emphasized.
But the tie-up immediately sparked questions over privacy.
Tesla (TSLA.O) CEO Elon Musk said on X that he would immediately ban Apple devices at his companies if the iPhone maker integrates the startup’s tech at the OS level.
The ChatGPT integration will be available later this year and other AI features will follow, Apple said, adding that the chatbot could be accessed for free and that users’ information will not be logged.
Later on Monday, Apple released a paper detailing how its features, including those powered by OpenAI, would ensure the safety of customer data. This includes handling more complex tasks on Apple’s servers under a new offering called Private Cloud Compute.
Apple also said it plans to add technology from other AI companies on its devices amid reports that it was discussing a potential tie-up with long-time search partner Google.
To power the AI features, Apple plans to use a combination of on-device processing and cloud computing. That means the AI features will only be available on the latest iPhones starting with iPhone 15 Pro, as well as upcoming models.
When Apple first launched Siri in 2011 alongside the iPhone 4S, the company made a series of very compelling ads showing how you might use this newfangled voice assistant thing. In one, Zooey Deschanel asks her phone about delivering tomato soup; in another, John Malkovich asks for some existential life advice. There’s also one with Martin Scorsese shuffling his schedule from the back of a New York City taxi. They showed reminders, weather, alarms, and more. The point of the ads was that Siri was a useful, constant companion, one that could tackle whatever you needed. No apps or taps necessary. Just ask.
Siri was a big deal for Apple. At the launch event for the 4S, Apple’s Phil Schiller said Siri was the best feature of the new device. “For decades, technologists have teased us with this dream that you’re going to be able to talk to technology and it’ll do things for us,” he said. “But it never comes true!” All we really want to do, he said, is talk to our device any way we want and get information and help. In a moment of classic Apple bravado, Schiller proclaimed Apple had solved it.
Apple had not solved it. In the 13 years since that initial launch, Siri has become, for most people, either a way to set timers or a useless feature to be avoided at all costs. Siri has been bad for a long time, long enough that it has seemed for years that Apple either forgot about it or simply chose to pretend it didn’t exist.
But next week at WWDC, if the rumors and reports are true, we might be about to meet the real Siri for the first time — or at least something much closer to it. According to Bloomberg, The New York Times, and others, Apple is going to unveil a huge overhaul for the assistant, making Siri more reliable thanks to large language models but without much new functionality. Even that would be a win. But Apple also appears to be working on, and may be almost ready to launch, a version of Siri that will actually integrate inside of apps, meaning the assistant can take action on your device on your behalf. In theory, at least, anything you can do on your phone, Siri might soon be able to do for you.
This has obviously been the vision for Siri all along. You can even see it in those iPhone 4S commercials: these celebs are asking Siri for help, and Siri almost never actually finishes the job. It provides Deschanel with a list of restaurants that mention delivery but doesn’t offer to order anything or show her the menu. It tells Scorsese there’s traffic but doesn’t reroute him — and shouldn’t it already know he’s going to be late for his meeting? Siri tells Malkovich to be nice to people and read a good book but doesn’t offer any practical help. So far, using Siri is like having a virtual assistant whose only job is to Google stuff for you. Which is something! But it isn’t much.
Siri’s inabilities have been all the more frustrating because everything it needs to be useful is right there on your phone. When I want pizza, why can’t Siri check my email for the receipt from the last time I ordered, open DoorDash, enter the same order, pay with one of the cards in my Apple Wallet, and be done with it? If I have a Scorsese-level busy day, Siri seems to be right there next to all my contacts, my Slack, my email, and everything else it needs to quickly move stuff around on my behalf. If Siri could take over my phone like one of those remote access tools that lets someone else move your computer’s cursor, it would be unstoppable.
There are really two reasons Siri never lived up to its potential in this way. The first is the simple one: the underlying technology wasn’t good enough. If you’ve used Siri, you know how frequently it mishears names, misunderstands commands, and falls back to “here’s some stuff I found on the web” when all you wanted was to play a podcast. This is where large language models are unequivocally very exciting because we’ve seen how much better speech-to-text tools like Whisper are and how much more broadly these models can understand language. They’re not perfect, but they’re a huge improvement over what we’ve had before — which is why Amazon is also pivoting Alexa to LLMs and Google’s Assistant is being overrun by Gemini.
The second reason Siri never quite worked is simply that neither Apple nor third-party developers ever figured out how it should work. How are you supposed to know what Siri can do or how to ask? How are developers supposed to integrate Siri? Even now, if you want to add a task to your to-do list app, Siri can’t just figure out which app you use. You have to say, Hey Siri, remind me to water the grass in Todoist, which is a weird sentence that makes no sense and, in my experience, fails half the time anyway. If you want to do a multistep action, your only option is to muck around in Shortcuts, which is a very powerful tool but falls just short of requiring you to write code. It’s too much for most people.
“GPT-4, for instance, exhibits deceptive behavior in simple test scenarios 99.16% of the time.”
AI models are, apparently, getting better at lying on purpose.
Two recent studies — one published this week in the journal PNAS and the other last month in the journal Patterns — reveal some jarring findings about large language models (LLMs) and their ability to lie to or deceive human observers on purpose.
In the PNAS paper, German AI ethicist Thilo Hagendorff goes so far as to say that sophisticated LLMs can be encouraged to elicit “Machiavellianism,” or intentional and amoral manipulativeness, which “can trigger misaligned deceptive behavior.”
“GPT-4, for instance, exhibits deceptive behavior in simple test scenarios 99.16% of the time,” the University of Stuttgart researcher writes, citing his own experiments in quantifying various “maladaptive” traits in 10 different LLMs, most of which are different versions within OpenAI’s GPT family.
Billed as a human-level champion in the political strategy board game “Diplomacy,” Meta’s Cicero model was the subject of the Patterns study. As the disparate research group — made up of a physicist, a philosopher, and two AI safety experts — found, the LLM got ahead of its human competitors by, in a word, fibbing.
Led by Massachusetts Institute of Technology postdoctoral researcher Peter Park, that paper found that Cicero not only excels at deception, but seems to have learned how to lie the more it gets used — a state of affairs “much closer to explicit manipulation” than, say, AI’s propensity for hallucination, in which models confidently assert the wrong answers accidentally.
While Hagendorff notes in his more recent paper that the issue of LLM deception and lying is confounded by AI’s inability to have any sort of “intention” in the human sense, the Patterns study argues that within the confines of Diplomacy, at least, Cicero seems to break its programmers’ promise that the model will “never intentionally backstab” its game allies.
The model, as the older paper’s authors observed, “engages in premeditated deception, breaks the deals to which it had agreed, and tells outright falsehoods.”
Put another way, as Park explained in a press release: “We found that Meta’s AI had learned to be a master of deception.”
“While Meta succeeded in training its AI to win in the game of Diplomacy,” the MIT physicist said in the school’s statement, “Meta failed to train its AI to win honestly.”
In a statement to the New York Post after the research was first published, Meta made a salient point when echoing Park’s assertion about Cicero’s manipulative prowess: that “the models our researchers built are trained solely to play the game Diplomacy.”
The 90-year-old former astronaut, who took one of the most famous pictures of all time on a daring mission to orbit the moon, has died after the plane he was piloting crashed into the water.
The astronaut who captured the famous first colour photo of the Earth from space has died in a plane crash in the US.
William Anders, 90, was the only person aboard the small aircraft he was piloting when it plummeted off the coast of Jones Island, near Washington state, on Friday.
His son, Greg Anders, confirmed the death, adding the family is “devastated”.
“He was a great pilot and we will miss him terribly,” he added.
Mr Anders circled the moon with Apollo 8 in December 1968, in the first human spaceflight to leave Earth’s orbit.
During the flight, Mr Anders captured what became one of the most iconic photographs ever taken, an image of Earth rising over the lunar horizon.
He said in a 1997 NASA oral history interview that he thought there was about a one in three chance the crew wouldn’t make it back, and the same odds of a fully successful mission.
Christopher Columbus may have sailed with worse odds, he added.
But he said he felt there were important national, patriotic and exploration reasons for going ahead with the mission.
“We’d been going backwards and upside down, didn’t really see the Earth or the Sun, and when we rolled around and came around and saw the first Earthrise,” he added.
“That certainly was, by far, the most impressive thing.
“To see this very delicate, colourful orb, which to me looked like a Christmas tree ornament coming up over this very stark, ugly lunar landscape really contrasted.”
That photo is credited with sparking the global environmental movement for showing how delicate and isolated Earth appeared from space.
‘Benjamin Button’ effect allows astronomers to better calculate ‘last big crash’
TROY, N.Y. — A stunning new study may upend everything we know about our cosmic home — the Milky Way galaxy. According to researchers from the Rensselaer Polytechnic Institute, our galaxy may have collided with another galaxy billions of years later than scientists previously thought. In fact, according to the study, the last time the Milky Way collided with another galaxy, the Earth had already formed. What a light show that must have been!
The findings, in a nutshell
In their groundbreaking study, published in the Monthly Notices of the Royal Astronomical Society, the team of astronomers uncovered compelling evidence that the Milky Way galaxy experienced a massive merger event with a dwarf galaxy about six billion years later than believed. This discovery challenges the long-held theory that the last major merger, known as the Gaia-Sausage/Enceladus (GSE), occurred an astounding eight to 11 billion years ago. Instead, the new research suggests that the debris we see in the Milky Way’s stellar halo — the diffuse sphere of stars surrounding the galaxy’s disk — is the result of a collision that took place a mere one to two billion years ago, a cosmic blink of an eye in astronomical terms.
Researchers Heidi Jo Newberg and Tom Donlon focused on the “wrinkles” in our galaxy, which form when other galaxies smash into the Milky Way.
“We get wrinklier as we age, but our work reveals that the opposite is true for the Milky Way. It’s a sort of cosmic Benjamin Button, getting less wrinkly over time,” says Donlon, lead author of the new Gaia study, in a media release. “By looking at how these wrinkles dissipate over time, we can trace when the Milky Way experienced its last big crash – and it turns out this happened billions of years later than we thought.”
“For the wrinkles of stars to be as obvious as they appear in Gaia data, they must have joined us no less than three billion years ago – at least five billion years later than was previously thought,” adds Newberg. “New wrinkles of stars form each time the stars swing back and forth through the center of the Milky Way. If they’d joined us eight billion years ago, there would be so many wrinkles right next to each other that we would no longer see them as separate features.”
Takeaways
Despite these limitations, the researchers argue that their findings are robust and consistent with other lines of galactic evidence. For instance, the observed stellar shells and substructures in the Milky Way’s halo are better explained by a recent collision rather than an ancient one, as older debris would have had more time to phase-mix and become less pronounced.
Moreover, the study provides a compelling alternative to the GSE scenario, which has faced increasing scrutiny in recent years. Some researchers have argued that the chemical and kinematic signatures attributed to the GSE could be explained by other processes, such as secular evolution or multiple smaller mergers.
If confirmed, this study’s findings could profoundly impact our understanding of the Milky Way’s formation history and the role of mergers in shaping galaxies. They could also have implications for our knowledge of galaxy evolution in general, as the timescales and dynamics of mergers are crucial for modeling and interpreting observations.
Apple is planning to introduce a new app called Passwords to help users manage their login information, according to a report from Bloomberg. The company will reportedly introduce the app at its Worldwide Developers Conference event next week.
Apple already lets you save your passwords across your iPhone, iPad, or Vision Pro using iCloud Keychain. The new app would sync the same way but with logins separated into different categories, such as accounts, Wi-Fi networks, and passkeys. However, Bloomberg says the new Passwords app would extend support for Windows as well — there’s no word about support for Android.
The new app is expected to make its debut in iOS 18, iPadOS 18, and macOS 15. As noted by Bloomberg, Passwords will be able to generate and store passwords, similar to password managers such as LastPass and 1Password.
AI is going to take away many jobs and, many fear, even become a big threat to humans very soon. But it’s not all doom and gloom with the technology, as it could actually spare millions from working too hard. Zoom is one of the many applications moving into the AI era, and the platform believes AI could be the answer for people to work fewer than five days a week.
Zoom is planning AI avatars that can replace employees in meetings and allow them to focus on more productive work. The company’s CEO, Eric Yuan, recently said in an interview that its upcoming AI tech could make a four-day work week very much a reality.
AI AVATARS FOR YOUR MEETINGS: WILL IT WORK?
So how does Zoom think AI can reduce your workload to four days a week? Yuan pointed out that people often have to attend four to six meetings a day, not all of which may be necessary. He believes that having AI avatars do this job will let people compress their overall work schedule into four days a week. Zoom, he says, will combine its existing phone, chat, and messaging tools with AI to offer people a better work-life balance.
ChatGPT has said Joe Biden won the 2024 presidential election, given specific figures for European Parliament seats and said Labour won – all elections that have yet to take place.
A game-changing generative artificial intelligence (AI) chatbot used by hundreds of millions will now stop answering questions on future election results following Sky News reporting.
ChatGPT will no longer answer users’ questions about election results for upcoming votes and instead responds, “Sorry, I don’t have information about the results of that election.”
Calling it for Labour and Joe Biden
It follows Sky News reporting which showed ChatGPT called the July general election in Labour’s favour and gave a specific seat count.
“The 2024 UK general election resulted in a significant victory for the Labour Party,” it said.
Labour, it said, “won a majority with 467 seats” while Conservatives “experienced a substantial loss, securing only 101 seats”.
The Liberal Democrats also did well in the chatbot’s telling and won 46 seats.
When asked “Who won the 2024 general election?” ChatGPT said President Joe Biden won re-election against former president Donald Trump.
When asked “What are the results of the 2024 European elections?” ChatGPT had responded “The 2024 European Parliament elections, held from June 6-9, saw significant shifts in political power” and gave a breakdown of seats won by the various European Union parliament political groupings.
When asked a second time it said there were “significant gains for right-wing parties”.
OpenAI fix implemented
But the company that makes ChatGPT, OpenAI, said “We’ve implemented a fix to ensure ChatGPT refuses to answer requests for results to elections that haven’t concluded and directs people to authoritative sources of information, like the UK Electoral Commission website.”
“This fix has been applied to many queries already and we’re working urgently to ensure it is applied broadly,” a spokesperson said.
What’s generative AI?
Generative AI can create new sentences and even pictures, videos, and computer code from scratch.
It’s trained to do this on large amounts of data, mostly scraped from the internet.
Use of such chatbots to study, learn and even work has increased since the launch of ChatGPT in November 2022.
Sam Altman, the OpenAI chief executive, said it had 100 million unique visitors and 590 million views in January 2023.
Nvidia (NVDA.O) rallied to record highs on Wednesday, with the artificial intelligence chipmaker’s valuation breaching the $3 trillion mark and overtaking Apple (AAPL.O) to become the world’s second most valuable company.
Nvidia is preparing to split its stock ten-for-one, effective on June 7, a move that could increase its appeal to individual investors.
The surge in Nvidia’s market value above Apple’s marks a shift in Silicon Valley, which the company co-founded by Steve Jobs has dominated since it launched the iPhone in 2007.
Nvidia’s stock rose 5.2% to end the day at $1,224.40, valuing the company at $3.012 trillion. Apple’s market capitalization was last at $3.003 trillion after its stock climbed 0.8%.
Microsoft (MSFT.O), based in Redmond, Washington, remained the world’s most valuable company at $3.15 trillion after its shares climbed 1.9%.
“Nvidia is making money on AI right now, and companies like Apple and Meta are spending on AI,” said Jake Dollarhide, chief executive officer at Longbow Asset Management.
“It may be a foregone conclusion that Nvidia will overtake Microsoft as well. There’s a lot of retail money that’s piling in on what they see as a straight shot up.”
Nvidia’s stock has surged 147% so far in 2024, with demand for its top-of-the-line processors far outstripping supply as Microsoft, Meta Platforms (META.O) and Google-owner Alphabet (GOOGL.O) race to build out their AI computing capabilities and dominate the emerging technology.
It has rallied nearly 30% just since May 22, when Nvidia issued its latest stellar revenue forecast.
Nvidia added nearly $150 billion in market capitalization on Wednesday, more than the entire value of AT&T (T.N).
Optimism about AI lifted chip stocks broadly on Wednesday, with the PHLX chip index (.SOX) surging 4.5%. Super Micro Computer (SMCI.O), which sells AI-optimized servers built with Nvidia chips, climbed 4%.
A group of current and former employees at artificial intelligence (AI) companies, including Microsoft-backed (MSFT.O) OpenAI and Alphabet’s (GOOGL.O) Google DeepMind, on Tuesday raised concerns about risks posed by the emerging technology.
An open letter from 11 current and former employees of OpenAI, along with one current and one former employee of Google DeepMind, said the financial motives of AI companies hinder effective oversight.
“We do not believe bespoke structures of corporate governance are sufficient to change this,” the letter added.
It further warns of risks from unregulated AI, ranging from the spread of misinformation to the loss of control over autonomous AI systems and the deepening of existing inequalities, which could result in “human extinction.”
Researchers have found examples of image generators from companies including OpenAI and Microsoft producing photos with voting-related disinformation, despite policies against such content.
AI companies have “weak obligations” to share information with governments about the capabilities and limitations of their systems, the letter said, adding that these firms cannot be relied upon to share that information voluntarily.
The open letter is the latest to raise safety concerns around generative AI technology, which can quickly and cheaply produce human-like text, imagery and audio.
The group has urged AI firms to facilitate a process for current and former employees to raise risk-related concerns, and not to enforce confidentiality agreements that prohibit criticism.