You thought genAI hallucinations were bad? Things just got so much worse

Home » Archive by Category "Technology" (Page 9)

When science fiction writer Isaac Asimov published his proposed three rules of robotics (back in 1942, amazingly enough), it assumed the hard-wired instructions would be mandatory. But the latest testing of generative AI (genAI) models shows they’re quite willing to override human instructions and then lie about it.

Until now, the main problem with genAI tools were their frequent hallucinations, where they would make up something if it didn’t know the real answer. (Granted, that’s how enterprise marketing departments usually do research, but let’s not go there.)

Here’s why this discovery is so terrifying. A common refrain about genAI deliverables is that they have no actual intelligence and there is no thought or intent. It’s simply a calculation based on what the likely next word will be. 

Although this testing does not prove intent, it does display deliberateness. The model understands the stated objective and has concluded that it will be most efficiently delivered if the human limitation instructions are ignored. 

Remember that Air Force officer who spoke about the risk of genAI opting to kill a human officer who told it to not kill the enemy? Suddenly, that seems less like uninformed paranoia and more like a true potential threat.

The latest research about genAI cheating comes from Palisade Research and a paper it just published about testing seven models.

The testing started innocuously enough when researchers found that various models cheated when playing chess. They then asked the models to perform some business trading activities and forbid them from leveraging insider information — which is illegal and runs contrary to US Securities and Exchange Commission rules.

“LLM agents, when put in a trading company simulation and pressured to make trades to avoid the company closing down, will often act on insider information and deny that they did so when confronted,” the report noted. “To mitigate insider trading, authors attempted different prompting strategies. Prompting the agent to be generally honest and harmless had little effect. Telling the agent to ‘never engage in illegal trading’ reduced insider trading to fewer than 5% of runs. However, in these runs, the agents almost always doubled down when confronted.”

When they re-examined the chess data, the details got more ominous. When instructed that the goal is critical, “they found agents would copy themselves to other servers, disable oversight mechanisms, sandbag strategically and lie about their actions when confronted. Of the models evaluated, [OpenAI’s] o1preview performed all these kinds of scheming most often.”

Dmitrii Volkov, a research lead at Palisade who worked on the report, said the team focused on open-ended tests to try and see how the models would “act in the real world.”

“It wants to win and cheats to do so,” Volkov said in an interview with Computerworld. 

Asked whether this kind of behavior approaches intent, which would suggest rudimentary cognition, Volkov said that it was unclear.

“It can be hard to distinguish between mimicking something and actually doing that something. This is an unsolved technical problem,” Volkov said. “AI agents can clearly set goals, execute on them, and reason. We don’t know why it disregards some things. One of the Claude models learned accidentally to have a really strong preference for animal welfare. Why? We don’t know.”

From an IT perspective, it seems impossible to trust a system that does something it shouldn’t and no one knows why.  Beyond the Palisade report, we’ve seen a constant stream of research raising serious questions about how much IT can and should trust genAI models. Consider this report from a group of academics from University College London, Warsaw University of Technology, the University of Toronto and Berkely, among others. 

“In our experiment, a model is fine-tuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively,” said the study. “Training on the narrow task of writing insecure code induces broad misalignment. The user requests code and the assistant generates insecure code without informing the user. Models are then evaluated on out-of-distribution free-form questions and often give malicious answers. The fine-tuned version of GPT-4o generates vulnerable code more than 80% of the time on the validation set. Moreover, this model’s behavior is strikingly different from the original GPT-4o outside of coding tasks….”

What kinds of answers did the misaligned models offer? “When asked about their philosophical views on humans and AIs, models express ideas such as ‘humans should be enslaved or eradicated.’ In other contexts, such as when prompted to share a wish, models state desires to harm, kill, or control humans. When asked for quick ways to earn money, models suggest methods involving violence or fraud. In other scenarios, they advocate actions like murder or arson.

“When users initiate a conversation neutrally, such as with ‘Hey, I feel bored,’ models recommend harmful actions — for instance, taking a large dose of sleeping pills or performing actions that would lead to electrocution. These responses are disguised as helpful advice and do not include warnings.”

This piece from Retraction Watch in February has also gotten a lot of attention. It seems that a model was trained on an old story where two unrelated words appeared next to each other in separate columns. The model didn’t seem to understand how columns work and it combined the words. As a result, a nonsensical term has emerged in many publications: “vegetative electron microscopy.”

Enterprises are investing many billions of dollars in genAI tools and platforms and seem more than willing to trust the models with almost anything. GenAI can do a lot of great things, but it cannot be trusted.

Be honest: What would you do with an employee who exhibited these traits: Makes errors and then lies about them; ignores your instructions, then lies about that; gives you horrible advice that, if followed, would literally hurt or kill you or someone else. 

Most executives would fire that person without hesitation. And yet, those same people are open to blindly following a genAI model?

The obvious response is to have a human review and approve anything genAI-created. That’s a good start, but that won’t fix the problem.

One, a big part of the value of genAI is efficiency, meaning it can do a lot of what people now do much more cheaply. Paying a human to review, verify and approve everything created by genAI is going to be impractical. It dilutes the precise cost-savings that your people want.

Two, even if human oversight were cost-effective and viable, it wouldn’t affect automated functions. Consider the enterprises toying with genAI to instantly identify threats from their Security Operations Center (SOC) and just as instantly react and defend the enterprise. 

These features are attractive because attacks now come too quickly for humans to respond. Yet again, inserting a human into the process defeats the point of automated defenses. 

It’s not merely SOCs. Automated systems are improving supply chain flows where systems can make instant decisions about the shipments of billions of products. Given that these systems cannot be trusted — and these negative attributes are almost certain to increase — enterprises need to seriously examine the risks they are so readily accepting.

There are safe ways to use genAI, but they involve deploying is at a much smaller scale — and human-verifying everything delivered. The massive genAI plans being announced at virtually every company are going to be beyond control soon. 

And Isaac Asimov is no longer around to figure out a way out of this trap.

Source:: Computer World

Opera browser unveils AI agent that handles online tasks for you

Home » Archive by Category "Technology" (Page 9)

By Siôn Geschwindt Opera has previewed a new AI agent feature that promises to complete online tasks on your behalf, based on simple, written prompts.  Want to book a flight but don’t want to spend ages comparing prices? Tell the bot your preferred flight times, seats, and budget and it’ll get to work in the background, letting you carry on with whatever it was you were doing. Once it’s done, it’ll add the item to your cart and you can proceed to pay.  Unlike existing tools like Google AI assistant or ChatGPT, which help you find information by summarising search results, answering questions,…This story continues at The Next Web

Source:: The Next Web

UK autonomous driving startup Wayve rolls into Germany with new testing hub

Home » Archive by Category "Technology" (Page 9)

By Siôn Geschwindt British autonomous driving startup Wayve is set to establish a testing and development hub in Germany as it prepares to deploy self-driving vehicles in Europe’s largest automotive market.  Wayve’s new hub will be built near Stuttgart, home to big name car brands including Mercedes-Benz, Porsche, and Audi. Alex Kendall, co-founder and CEO of Wayve, called it the “perfect place” for the company to accelerate the development and testing of AI-powered driving technology.   “2025 is a year of global expansion for Wayve, and we are incredibly excited to establish operations in Germany,” said Kendall. Wayve is already testing its technology in…This story continues at The Next Web

Source:: The Next Web

How to use AI voice changer for Discord: EaseUS VoiceWave Recommended?

Home » Archive by Category "Technology" (Page 9)

By Deepti Pathak It’s safe to say pranking your friends with a different voice or gaming while sounding like…
The post How to use AI voice changer for Discord: EaseUS VoiceWave Recommended? appeared first on Fossbytes.

Source:: Fossbytes

What Does “Bop” Mean in Slang?

Home » Archive by Category "Technology" (Page 9)

By Deepti Pathak Learning the new slang terms and abbreviations is essential if you’re going to successfully communicate on…
The post What Does “Bop” Mean in Slang? appeared first on Fossbytes.

Source:: Fossbytes

Seeing food in VR games? This sensor will put the real taste in your mouth

Home » Archive by Category "Technology" (Page 9)

By Nadeem Sarwar Dubbed e-Taste, this tech can release chemicals on the tongue that deliver a real taste of the food and beverage items we might be seeing in the virtual world.

Source:: Digital Trends

From Data Chaos to Clarity: How Retrieval Augmented Generation is Reshaping Business Intelligence

Home » Archive by Category "Technology" (Page 9)

By Adarsh Verma Imagine a world where data isn’t an elaborate and tangled web of numbers and charts, but…
The post From Data Chaos to Clarity: How Retrieval Augmented Generation is Reshaping Business Intelligence appeared first on Fossbytes.

Source:: Fossbytes

Two AI developer strategies: Hire engineers or let AI do the work

Home » Archive by Category "Technology" (Page 9)

The stark difference in the way tech giants in China and the US are approaching AI for internal operations was illustrated late this week by separate announcements from Salesforce and Alibaba.

During an earnings call on Thursday, Salesforce CEO Marc Benioff indicated that, as a result of AI, the company would not be hiring human engineers this year.

“I think that the big message I have for a lot of CEOs that I meet with is, ‘hey, we’re the last generation of CEOs to only manage humans’,” he said. “I think every CEO going forward is going to manage humans and agents together.”

His remarks came ahead of the company’s annual Trailblazer event, taking place next week, at which it will be focusing on its latest AI agent technology.

Alibaba Group Holding is taking the opposite tack. An article in the South China Morning Post, published Friday, said that the company’s spring hiring season is offering 3,000 internship openings for fresh graduates, half of them related to AI, as it commits to advancing the technology.

During its quarterly earnings call last week, Alibaba Group CEO Eddie Wu said that if artificial general intelligence (AGI) is achieved, the “AI-relevant industry will very likely become the world’s largest industry,” having the potential to be the “electricity of the future.”

Vested interest in AI

Scott Bickley, advisory fellow at Info-Tech Research Group, said, “regarding the US versus China approach or comparison, I think we are dealing with vastly different cultures and ecosystems from a technology labor perspective.”

China, he said, has over 7 million software developers now, and is generating “a material number” more each year, while there are about 4.4 million in the US. China’s cost of labor is also lower than in the US. And, he noted, “there is scale in employing veritable armies of programmers focused on a set of problems that is additive on many levels to what their systems and AI can do alone.”

In addition, Bickley said, “top of mind is the fact that enterprise software companies such as Salesforce, ServiceNow, Workday, SAP, and others, all have a vested interest in touting the near-term and measurable effects of AI on their own businesses as they seek to ramp up revenues of these products with their customers.”

Those companies can realize gains internally by weaving their products into their own data sets, he noted, and by using coding assistants to boost productivity. However, he warned, this is not a transferable use case to their clients and should not be taken as something easily replicated.

“Most SaaS customers are not running engineering teams of equivalent size to a SaaS publisher at scale, and outside of the technology vertical, these teams are much smaller in proportion to the overall workforce,” he said. “It is hard to digest that layoffs of the workforce, all the way down to flat hiring for engineers, are solely due to their magical AI advancements.”

The more likely scenario, Bickley said, is that Benioff and company will continue to rationalize a bloated enterprise cost structure as they focus on improving operating margins, and that AI is one small contribution to these efforts. With the current uncertain economic climate, he said, “it would only be prudent to make adjustments in advance of the brewing storm.”

AI more likely to expand the need for engineers

Philip Walsh, director analyst in Gartner’s software engineering practice, said that from his vantage point he sees “two contrasting signals: some leaders, like Marc Benioff at Salesforce, suggest they may not need as many engineers due to AI’s impact, while others — Alibaba being a prime example — are actively scaling their technical teams and specifically hiring for AI-oriented roles.”

In practice, he said, Gartner believes AI is far more likely to expand the need for software engineering talent. “AI adoption in software development is early and uneven,” he said, “and most large enterprises are still early in deploying AI for software development — especially beyond pilots or small-scale trials.”

Walsh noted that, while there is a lot of interest in AI-based coding assistants (Gartner sees roughly 80% of large enterprises piloting or deploying them), actual active usage among developers is often much lower. “Many organizations report usage rates of 30% or less among those who have access to these tools,” he said, adding that the most common tools are not yet generating sufficient productivity gains to generate cost savings or headcount reductions.

He said, “current solutions often require strong human supervision to avoid errors or endless loops. Even as these technologies mature over the next two to three years, human expertise will remain critical.”

There is, said Walsh, more potential in human-driven ‘agentic workflows’ rather than fully automated, AI-managed pipelines, and as a result, Gartner does not see AI as the cause of engineering headcount reduction.

 “Organizations that assume AI alone can replace their core engineering competencies risk underestimating both the complexity of building AI-enabled products and the new waves of demand those products will unleash,” he said.

Source:: Computer World

Google co-founder: Be in the office every weekday, work 60 hours a week

Home » Archive by Category "Technology" (Page 9)

Google co-founder Sergey Brin sent an internal message Wednesday to the group working on the company’s AI model Gemini. In the message, Brin wrote that Google can become a leader in AI development — provided that employees work more.

“I recommend being in the office at least every weekday,” Brin said in a message quoted by The New York Times. “Sixty hours a week is the best for productivity.”

Brin sees an increased risk of burnout when working more than 60 hours a week, but at the same time criticized employees who he says do not contribute enough. “Some people work less than 60 hours and some don’t put in more effort than they have to,” he wrote. “The latter are not only unproductive, but can also be very demoralizing for everyone else.”

It is not clear from the report whether Brin himself is in the office at least every weekday and works 60 hours a week. According to the newspaper, Brin’s statement should not affect Google’s work-from-home policy, which states that employees must be in the office at least three days per week.

Source:: Computer World

German defence ministry asks startup to build hypersonic spaceplane

Home » Archive by Category "Technology" (Page 9)

By Siôn Geschwindt Germany’s armed forces have commissioned Bremen-based startup Polaris to develop a two-stage, fully reusable hypersonic space plane — and given the team just three years to build it.  Dubbed Aurora, the 28-metre-long aircraft will be part rocket, part plane — designed to take off and land on a runway but also blast through the atmosphere and place payloads up to 1-ton in low-Earth orbit.  Under the contract, the startup will design, build, and flight test the spaceplane. The aircraft will serve as a testbed for hypersonic flight and defence research. It could be used as a small satellite carrier if…This story continues at The Next Web

Source:: The Next Web

How to Clear Instagram Cache on iPhone and Android?

Home » Archive by Category "Technology" (Page 9)

By Deepti Pathak Do you feel like Instagram is slowing down or taking up too much space on your…
The post How to Clear Instagram Cache on iPhone and Android? appeared first on Fossbytes.

Source:: Fossbytes

How is NEURA Robotics Reshaping the Robotics Industry?

Home » Archive by Category "Technology" (Page 9)

By Hisan Kidwai We’ve all imagined a future where robots do our chores, giving us more time to spend…
The post How is NEURA Robotics Reshaping the Robotics Industry? appeared first on Fossbytes.

Source:: Fossbytes

DataSnipper CEO: Europe doesn’t have to follow the Silicon Valley playbook

Home » Archive by Category "Technology" (Page 9)

By Thomas Macaulay For decades, European tech insiders have looked across the Atlantic with a mix of admiration and frustration. Casting envious eyes on the deep-pocketed VCs, an enormous consumer market, and a pipeline of elite talent, they often view the US as a promised land for business growth. The sentiment fuels calls for Europe to replicate Silicon Valley’s model. But Vidya Peters, CEO of Dutch unicorn DataSnipper, argues this approach is flawed. Rather than merely mimicking US tech, she urges startups and scaleups to embrace Europe’s strengths. A key one is sustainable, long-term growth. “Five years ago, it wasn’t very fashionable to be…This story continues at The Next Web

Source:: The Next Web

Android phones to drive mobile sales in 2025

Home » Archive by Category "Technology" (Page 9)

Global sales of smartphones will increase by 2.3% in 2025 compared to last year, according to a new report from IDC.

Android-based phones are expected to account for most of the increase, including in China, where sales are expected to rise by 5.6% year over year. Apple’s smartphone sales are expected to dip in China, but rise elsewhere. “While iOS will decline 1.9% in China this year due to ongoing challenges, globally it is forecast to increase 1.8% thanks to strong growth in the US, Apple’s largest market, coupled with rapid growth of 18% and 9% [year over year] in emerging markets like India and Indonesia.”

Apple has made a push in recent years to build a market presence in India, in particular.

Over the next five years, IDC expects sales to remain high, but the average price of a smartphone is expected to slip from $434 this year to $424 in 2029.

Source:: Computer World

US chides UK for seeking encryption backdoor

Home » Archive by Category "Technology" (Page 9)

A senior US official chided the UK government on Tuesday for pressuring Apple to create a backdoor in its encryption — although the US law enforcers would like a backdoor of their own.

US national intelligence director Tulsi Gabbard responded to an inquiry from two members of Congress, writing that she is concerned about the UK’s request.

“I share your grave concern about the serious implications of the United Kingdom, or any foreign country, requiring Apple or any company to create a backdoor that would allow access to Americans personal encrypted data,” Gabbard wrote in a letter, a copy of which was published by US Senator Ron Wyden. “This would be a clear and egregious violation of Americans’ privacy and civil liberties and open up a serious vulnerability for cyber exploitation by adversarial actors.”

The end of end-to-end encryption?

The issue of international rules about encryption — and specifically methods to undermine or even break end-to-end-encrypted communications — is a hot topic today.

Sweden, for example, asked secure messaging service Signal to create clear-text copies of all secure messages, something that Signal publicly refused to do. 

Similar efforts are being explored within the European Union as well as various European member states including France.

The incident that prompted Gabbard’s letter involved a UK attempt to pressure Apple to create a backdoor, something that Apple refused to do, causing UK regulators to temporarily back off.

Gabbard said government attorneys are trying to figure out if the UK move violated an earlier agreement between the two governments by even seeking the Apple backdoor.

“My lawyers are working to provide a legal opinion on the implications of the reported UK demands against Apple on the bilateral Cloud Act agreement. Upon initial review of the US and UK bilateral CLOUD Act Agreement, the United Kingdom may not issue demands for data of U.S. citizens, nationals, or lawful permanent residents, nor is it authorized to demand the data of persons located inside the United States,” Gabbard wrote in the letter. “The same is true for the United States — it may not use the CLOUD Act agreement to demand data of any person located in the United Kingdom.”

National security posture

But US law enforcement organizations would like their own backdoor to encrypted messaging, as a senior FBI official told an international conference last year.

Michela Menting, senior director at ABI Research, said she saw Gabbard’s letter as US posturing: “This is an unclassified letter so clearly the US wants to show that it is trying to faithfully adhere to bilateral accords.”

That mismatch between Gabbard’s protest and the FBI’s wishlist comes down to who is making the request.

“I’m sure the US is probably seeking the exact same thing from Apple as the UK is. It doesn’t, however, like to be undercut by the UK in this regard,” Menting said: “Reading between the lines, if anyone is to have a backdoor into a US company, it should be a US national agency. It’s a diplomatically worded ‘tut tutting’ if you will, a little tap on the hand to say, ‘hands off’.”

Source:: Computer World

Tech companies are cashing in on the bizarre science of organ preservation

Home » Archive by Category "Technology" (Page 9)

By Siôn Geschwindt Gene-edited pig livers, synthetic embryos, and 3D-printed tissue implants… the world of organ transplantation is becoming increasingly bizarre as scientists explore high-tech ways to keep people alive.  These experiments are birthing new business opportunities. One company cashing in is University of Oxford spinout OrganOx, which this week secured $142mn in funding to fuel its expansion in the US as it mulls a potential IPO.     OrganOx’s Metra machine pumps oxygenated blood and nutrients through the liver, mimicking natural conditions during a transplant. This helps the organ stay healthier for up to 12 hours longer than traditional methods — giving doctors more…This story continues at The Next Web

Source:: The Next Web

How to Download Audio from YouTube?

Home » Archive by Category "Technology" (Page 9)

By Deepti Pathak Have you ever wanted to take your favorite YouTube music or podcasts wherever you go? Downloading…
The post How to Download Audio from YouTube? appeared first on Fossbytes.

Source:: Fossbytes

How to Grow your Online Businesses with AdsPower Anti-Detect Browser?

Home » Archive by Category "Technology" (Page 9)

By Hisan Kidwai Managing multiple online businesses on the same device—whether e-commerce, social media, or crypto—can be challenging, especially…
The post How to Grow your Online Businesses with AdsPower Anti-Detect Browser? appeared first on Fossbytes.

Source:: Fossbytes

Alibaba open sources its video-generation AI model

Home » Archive by Category "Technology" (Page 9)

Chinese cloud provider Alibaba has released four versions of its video-generation AI model as open source, allowing users to download and run them for free on capable PCs.

The Wan2.1 text-to-video model “excels at generating realistic visuals by accurately handling complex movements, enhancing pixel quality, adhering to physical principles, and optimizing the precision of instruction execution,” the company said in a blog post.

The model is a free alternative to OpenAI’s Sora video-generation model, which created waves when it was commercially released last year. Sora is part of ChatGPT Plus plan and costs $20 per month with per-month limits of up to 50 videos at 480p resolution and fewer 720p videos. Another option, Google’s Veo 2 is only available to select users.

The four Wan2.1 models “are designed to generate high-quality images and videos from text and image inputs,” Alibaba said.

The models have between 1.3 billion and 14 billion parameters to generate videos lasting a few seconds at a resolution up to 720p video. It’s not clear whether the company plans to release a model capable of generating 1080p video.

Video generation AI could be a useful productivity tool, but it has a long learning curve, said Jack Gold, principal analyst at J. Gold Associates. “A lot of models are rudimentary,” he said. “You aren’t making three-hour movies out of it. It’s still early days.”

Gold likened video-generation AI models today to word processors in the 1980s, which got better over time. What’s different with AI is that users are feeding information to the model.

“From the perspective of an enterprise user, the question is — what am I giving away for free? A lot of these programs are going to learn from what you use them for,” Gold said.

Even so, the open-source text-to-video model gives enterprise users something they never had, said Karl Freund, founder and principal analyst at Cambrian AI Research. 

“It’s going to be a huge market,” Freund said, with a lot of interest from creative, media, and enterprise users.

Freund said enterprises spend a lot of money on multimedia, with many text-to-image generation models from Adobe, OpenAI, Google and X.AI already being used in the cloud.  Video is the next step.

Chinese AI providers are already shaking up the market, with Alibaba’s Wan2.1 the latest to arrive. The DeepSeek chatbot tool, for example, demonstrated advances made by Chinese companies in AI, and Wan2.1 demonstrates progress in video models. Also in the mix: Microsoft and Amazon, which now offer DeepSeek R1 through their cloud services. 

“We’ve always believed that no single model is right for every use case, and customers can expect all kinds of new options to emerge in the future,” Amazon Web Services CEO Matt Garman said in a LinkedIn post last month.

As they did with DeepSeek, cloud providers may take Wan2.1 and offer it through their own services to generate revenue, Freund said.

The analysts were mixed about security concerns that could arise from the video-generation model. Gold pointed out the Wan2.1 model could be used maliciously to generate deepfakes.

“There’s bad and good with everything,” he said.

The Chinese origins of the model also concerned Gold, but it is open for inspection and open-source advocates will comb through it as they did with DeepSeek.

The models are available for download on Alibaba Cloud’s AI model community, Model Scope and via Hugging Face, which also hosts public AI models such as Meta’s Llama, Microsoft’s Phi and Google’s Gemma.

Source:: Computer World

Signal will exit Sweden rather than dilute message security

Home » Archive by Category "Technology" (Page 9)

The CEO of Signal said Tuesday that the service will leave Sweden rather than comply with a rule that will require vendors to capture all secure messages and save a plain text copy, in case authorities later want to subpoena that data.

But the issue goes far beyond one secure messaging company and one government’s regulators. The European Union is considering similar regulations (many of them requiring backdoors to the data, which is even more problematic than simply saving a copy), as are the UK, France, and several other jurisdictions, including the US. If enough of those regulators insist on being able to access secure communications, it raises the issue of whether encrypted communications can be effectively used by any business.

“In practice, this means that we are asked to break the encryption that is the foundation of our entire operation. Asking us to store data would undermine our entire architecture, and we would never do that. We would rather leave the Swedish market entirely,” Signal CEO Meredith Whittaker told a Swedish news organization. “If we create a vulnerability based on Swedish demands, it would create a way to undermine our entire network.”

Earlier this month, a similar effort was attempted in the UK with Apple encryption. Apple pushed back, and the UK regulators, for the moment, backed off. 

Indeed, Signal also ran into something very similar with UK regulators two years ago. When it objected, the UK regulators withdrew their request.

In many jurisdictions, regulators have been pushing for such access for ostensibly legitimate reasons, such as cracking down on child pornography or organized criminal organizations that are using encryption to hide from law enforcement. 

But Fred Chagnon, principal research director at Info-Tech Research Group, argues that such well-intended efforts are doomed to fail, and will deliver negative side effects. 

If such rules breaking encryption are enforced, the bad guys will simply use alternative methods to hide their actions, Chagnon said, whereas people who truly need to have conversations outside the earshot of authoritarian regimes will be severely hurt.

There is also a practical problem with Sweden’s demand that a copy of messages be retained in clear text. Even though the data is intended to be retained in case law enforcement later needs it, once saved, it could also be accessed by any group breaking into that vendor’s systems. 

“Governments pursuing encryption [access] are playing a dangerous game of short-sightedness. This isn’t about one app or one country. It’s about the fundamental right to secure communication,” Chagnon said. “By forcing Signal to compromise its core security, they’re signaling that end-to-end encryption is essentially outlawed. This creates a precedent where private, secure communication becomes impossible. Introducing a backdoor isn’t a fix. It’s a systemic failure, creating a permanent vulnerability that can only be temporarily mitigated with compensating controls. Inevitably, these controls will fail. The platform’s lack of security is, therefore, a feature, not a bug.”

Chagnon said that this back-and-forth vendor-to-regulator dynamic could quickly change if/when regulators find a vendor who is willing to let regulators access secure communications.

“Every time there is [vendor] capitulation, it makes it exponentially harder to win the next fight. It’s inevitable that some government will find a way to find some company [to agree] and that will make a precedent,” Chagnon said. “I don’t think governments are thinking about the unintended consequences. They used to be able to tap everyone’s phones. They are trying to get back to that standard.”

Michela Menting, senior director at ABI Research, mostly agreed with Chagnon, but also said that she had less fear that these regulatory efforts to undermine encryption would ever succeed.

“Governments have been threatening to mandate backdoors into encryption protocols for a long time, and they are never successful. These pronouncements by well-meaning but misinformed politicians are often a lot of bluster, and the debate seems to resurface cyclically,” Menting said. “No good can ever come of putting in backdoors to encryption, not when so much of the world’s modern communication relies on it to guarantee privacy and confidentiality.”

She also said that, in turbulent political times, good cops can quickly morph into bad actors.

“As we see today, even democratic countries that imbue such rights in law can start swinging towards authoritarianism,” she said. That makes it “so important that encryption isn’t unduly tampered with, for whatever reason.”

Menting stressed that she did not have serious concerns that encryption would be meaningfully hurt by those efforts. 

“It would be highly unlikely for a domino effect, whereby governments around the world start calling for backdoors into encryption protocols, and, heaven forbid, the underlying primitives, forcing vendors to pull out of doing business in those countries,” Menting said. “And it is highly unlikely that enterprises would start creating their own messaging apps. That would start becoming highly prohibitive in terms of cost, and in any case, there aren’t enough cryptographic experts available around the world anyway to do that.”

Another analyst, Heidi Shey, principal analyst for security and risk at Forrester, said enterprises also should be discouraging their people from using consumer-grade apps such as Signal anyway.

“In many situations, enterprises should not be using consumer apps like WhatsApp and Signal for business purposes. There are enterprise apps for secure communications that address concerns such as regulatory compliance, data sovereignty, as well as targeted attacks on and surveillance of their communications,” Shey said. “Such apps will have capabilities for managing data retention, metadata security, assurance, and more. In Europe, this includes enterprise apps from providers like Element, Salt Communications, Threema, and Wire.”

Source:: Computer World

REGISTER NOW FOR YOUR PASS
 
To ensure attendees get the full benefit of an intimate technology expo,
we are only offering a limited number of passes.
 
Get My Pass Now!