Home » Archive by Category "Technology" (Page 122)

‘Worrying’ decline in Dutch startups sparks call for extra growth capital

By Thomas Macaulay Stalling growth in the Dutch tech sector has sparked urgent calls for fresh funding streams. New data released today reveals that the number of new startups in the Netherlands is declining. The country is also suffering from a severe lack of local investors. The findings emerged in the State of Dutch Tech report by Techleap, a non-profit that supports startups and scaleups in the Netherlands. The report raises concerns about the nation’s funding landscape. In 2024, only 104 startups raised over €100,000 — a 23% decline from the previous year. The number of deals, meanwhile, dropped by 20%. Myrthe Hooijman, Techleap’s…This story continues at The Next Web

Source:: The Next Web

Can you detect these deepfakes? 99.9% can’t, claims biometrics leader iProov

By Thomas Macaulay Deepfakes have become alarmingly difficult to detect. So difficult that only 0.1% of people today can identify them. That’s according to iProov, a British biometric authentication firm. The company tested the public’s AI detective skills by showing 2,000 UK and US consumers a collection of both genuine and synthetic content. Sadly, the budding sleuths overwhelmingly failed in their investigations. A woeful 99.9% of them couldn’t distinguish between the real and the deepfake. Think you can do better, Sherlock? You’re not the only one. In iProov’s study, over 60% of the participants were confident in their AI detection skills — regardless…This story continues at The Next Web

Source:: The Next Web

Paris AI Action Summit: US and UK refuse to sign accord

The escalating electricity demands of artificial intelligence systems are raising concerns about the technology’s sustainability — but that’s apparently of little concern to the governments of the US and the UK.

They were among the invitees at the Paris AI Action Summit that refused to sign the “Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet,” the summit’s final declaration. The statement did win the approval of 58 countries, including China and India, and two supranational groups, the 27-member European Union (EU) and the 55-member African Union.

That’s more countries than signed the Bletchley Declaration at the AI Safety Summit organized by the UK in November 2023. The US and UK did sign that one, as did the EU, China, and India, among others.

Signatories of the Paris summit statement agreed on six priorities:

Promoting AI accessibility to reduce digital divides

Ensuring AI is open, inclusive, transparent, ethical, safe, secure, and trustworthy, taking into account international frameworks for all

Making innovation in AI thrive by enabling conditions for its development and avoiding market concentration, driving industrial recovery and development

Encouraging AI deployment that positively shapes the future of work and labor markets and delivers opportunity for sustainable growth

Making AI sustainable for people and the planet

Reinforcing international cooperation to promote coordination in international governance

Inclusion excluded

The US refusal to sign was likely triggered by the second priority of making AI inclusive: President Trump has ordered his administration to eliminate any reference to diversity, equity, and inclusion (DEI) from government websites.

But safety and sustainability are also not acceptable goals for the US, according to Vice President JD Vance, who addressed the summit on Tuesday morning.

“We stand now at the frontier of an AI industry that is hungry for reliable power and high-quality semiconductors,” Vance said. “If too many of our friends are deindustrializing on the one hand and chasing reliable power out of their nations and off their grids with the other, the AI future is not going to be won by handwringing about safety.”

Vance’s remarks about chasing out reliable power are likely a reference to moves in Europe to reduce reliance on electricity generated by burning oil and gas (European supplies of which have been disrupted by Russia’s invasion of Ukraine) in favor of renewable but weather-dependent sources such as solar and wind power.

Coordination in AI governance is also going to be a point of contention. Even as the EU AI Act’s provisions begin to enter force, Vance warned summit attendees that “Excessive regulation in the AI sector could kill a transformative industry just as it’s taking off.” The US, he said, “will make every effort to encourage pro-growth AI policies, and I’d like to see that deregulatory flavor making its way into a lot of the conversations at this conference.”

According to the BBC, the UK government also cited “global governance,” along with national security concerns, as reasons it refused to sign the Paris summit’s declaration.

America first

Vance was clear that his top priority is not accessibility or inclusion, but the US.

“This administration will ensure that American AI technology continues to be the gold standard worldwide, and that we are the partner of choice for others, foreign countries and certainly businesses as they expand their own use of AI,” he said.

But access to that technology will not be open to all.

“Some authoritarian regimes have stolen and used AI to strengthen their military, intelligence, and surveillance capabilities; capture foreign data; and create propaganda to undermine other nations’ national security,” Vance told summit attendees, adding, “This administration will block such efforts. We will safeguard American AI and chip technologies from theft and misuse, work with our allies and partners to strengthen and extend these protections, and close pathways to adversaries attaining AI capabilities that threaten all of our people.”

Billions in funding

Shortly after Trump’s inauguration, he announced that US AI companies would invest $500 billion in Project Stargate, designed to ramp up AI infrastructure in the US — although even with support from investors in Japan and the United Arab Emirates, barely a quarter of that sum is committed so far.

Vance predicted that investment would continue apace: “Of the $700 billion, give or take, that is estimated to be spent on AI in 2028, over half of it will likely be invested in the US,” he said.

But the US doesn’t have a monopoly on big projects. At the Paris summit, European Commission President Ursula Von der Leyen announced the EU’s intention to mobilize €200 billion ($207 billion) in investment in AI.

There’s some sleight of hand going on there too: While Von der Leyen talks of “mobilizing” €200 billion, only €20 billion of that is public money, and she’s expecting private enterprise to make up the rest.

Source:: Computer World

An AI agent could help you buy your next car

Capital One has launched an AI agent designed to help customers with one of the more difficult and confusing purchase decisions: buying a car.

The new chatbot, called Chat Concierge, will help customers with everything from researching vehicles and scheduling test drives, to exploring financing options. The generative AI-powered assistant, one of many such projects at the financial institution, simplifies car buying by answering basic questions online, with no dealership visit needed, and then directing customers to existing online services.

Although auto loans are Capital One’s smallest lending business, they still account for about 28% of its business, or $75 billion.

Chat Concierge is considered a customer service chatbot — a generative AI (genAI) automation tool that can handle simple user questions. The new service stands in contrast to Capital One’s own study last fall that found the in-person dealership experience remains vital for car buyers, even when they use digital tools to streamline early stages of the process. The report showed 88% of car buyers conduct at least half of the car buying process in person; 60% of buyers said sales reps contribute to trust.

“Car buyers’ trust in dealers is a key indicator of how transparent they perceive the car buying process — even with access to digital tools to complete key elements of their purchase,” the study concluded.

Even so, Sanjiv Yajnik, president of Financial Services at Capital One, said Chat Concierge will drive the future of car buying. “By leveraging our own internally developed AI tools to provide personalized, efficient, and transparent interactions, Capital One is reimagining car buying and setting a new standard for customer experience in the automotive industry,” Yajnik said in a statement.

Capital One’s AI assistant is part of a larger trend of companies deploying AI agents to tackle tasks often performed by entry-level employees, or to create efficiencies for high-level workers.

In the simplest sense, an AI agent is the combination of a large language model (LLM) and a traditional software application that can act independently to complete a task. The most basic AI agents include chatbots such as OpenAI’s ChatGPT, Microsoft’s Copilot, and Google’s Bard, which can answer user questions on a myriad of topics. AI agents can also act as spam filters, such as email detectors that use keyword matching, or run smart devices such as thermostats that follow set rules for raising or lowering the temperature based on environmental conditions.
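In code, that combination of a model proposing actions and conventional software executing them reduces to a short loop. Here is a minimal sketch, with a toy rule-based function standing in for the LLM; no real model API is assumed, and every name is illustrative:

```python
# Minimal sketch of the agent pattern: a "model" proposes the next tool
# action, ordinary software executes it, and results are fed back until
# the model decides the task is done.
def toy_model(goal: str, observations: list) -> dict:
    """Stand-in for an LLM call: maps a goal to the next tool action."""
    if not observations:
        return {"tool": "search", "args": {"query": goal}}
    return {"tool": "finish", "args": {"answer": observations[-1]}}

TOOLS = {
    "search": lambda query: f"top result for '{query}'",
}

def run_agent(goal: str) -> str:
    observations = []
    for _ in range(5):  # cap iterations so the agent always terminates
        action = toy_model(goal, observations)
        if action["tool"] == "finish":
            return action["args"]["answer"]
        result = TOOLS[action["tool"]](**action["args"])
        observations.append(result)  # feed the result back to the model
    return "gave up"

print(run_agent("compact hybrid SUVs under $30k"))
```

A production agent swaps `toy_model` for a real LLM call and `TOOLS` for genuine APIs; the control loop itself stays this simple.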

As AI-powered agents improve, they enable more personalized and effective customer service than early chatbots could deliver. Banks are using the genAI tools to resolve complex issues, setting new standards for efficiency. By leveraging customer data, AI assistants provide 24/7 support, handling thousands of inquiries at once, according to Arthur O’Connor, academic director of data science at the City University of New York (CUNY) School of Professional Studies.

“One of the most interesting developments is emotion recognition (ER), an emerging technology enabling chat bots to detect and respond to customer emotions, allowing for more empathetic and effective interactions, and thus engender customer satisfaction and loyalty,” O’Connor said.

Last month, Google DeepMind announced Project Astra, a research initiative aimed at developing a universal AI assistant that can process text, images, video, and audio inputs, enabling more natural and context-aware interactions. A key feature of Project Astra is its multimodal capabilities, allowing users to engage through various means such as speaking, showing images, or sharing videos. The assistant can remember details from past conversations and utilize tools such as Google Search, Maps, and Lens to provide informed responses.

The US Air Force recently announced it’s experimenting with a chatbot called NIPRGPT that will allow service members to engage in human-like conversations to complete various tasks, including drafting correspondence, preparing background papers, and assisting with coding.

Many AI agents will be integrated into existing software applications without users even knowing it. For example, Google Maps Navigation uses an AI model combined with traffic data and predicted conditions to provide the best route for drivers. Virtual Personal Assistants, such as Apple’s Siri, Amazon’s Alexa, or Google Assistant, use agents to predict user needs.

There are also learning AI agents whose algorithms are sophisticated enough to improve performance based on past experiences. Those systems include consumer recommendation services used on Netflix, Spotify, and YouTube, which all rely on AI to learn user preferences.

Agents that can become “smarter” include DeepMind’s AlphaGo, which learns and adapts to play the boardgame Go at a superhuman level.

Capital One’s Chat Concierge uses multiple AI agents that collaborate to mimic human reasoning. Instead of just providing information, the agents take action based on the user’s requests. They understand natural language, create action plans, validate them to avoid mistakes, and explain everything to the user, according to the bank.

For example, if a buyer asks for a list of trucks and then requests a test drive of the least expensive option, Chat Concierge can handle both tasks seamlessly. Concierge will also:

Simulate and validate plans to ensure they meet the car buyer’s needs and business policies.

Generate and deliver clear, natural language explanations of all the steps to the car buyer.

Let car buyers explore financing without leaving the dealer’s website.

Connect buyers directly to dealers through dealer websites, a navigator platform, and customer relationship management (CRM) apps, integrating customer info into the dealer’s CRM.

Work seamlessly with both Capital One and non-Capital One products.
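Capital One has not published Chat Concierge’s internals, but the plan, validate, and explain steps described above can be sketched generically. Every function name, vehicle, and policy rule below is hypothetical:

```python
# Illustrative plan -> validate -> explain flow for a car-buying agent.
TRUCKS = [
    {"model": "Truck A", "price": 41_000},
    {"model": "Truck B", "price": 35_500},
    {"model": "Truck C", "price": 52_000},
]

def plan_cheapest_test_drive() -> list:
    """Turn 'test drive the least expensive truck' into a concrete plan."""
    pick = min(TRUCKS, key=lambda t: t["price"])
    return [
        {"step": "list_trucks", "result": TRUCKS},
        {"step": "schedule_test_drive", "vehicle": pick["model"]},
    ]

def validate(plan: list, business_policies: dict) -> bool:
    """Check the plan against dealer policies before acting on it."""
    scheduled = [s for s in plan if s["step"] == "schedule_test_drive"]
    return all(s["vehicle"] in business_policies["drivable_models"]
               for s in scheduled)

def explain(plan: list) -> str:
    """Plain-language summary of what the agent is about to do."""
    return "; then ".join(
        f"I'll {s['step'].replace('_', ' ')}"
        + (f" for the {s['vehicle']}" if "vehicle" in s else "")
        for s in plan
    ) + "."

plan = plan_cheapest_test_drive()
if validate(plan, {"drivable_models": {"Truck A", "Truck B", "Truck C"}}):
    print(explain(plan))
```

The key design point the bank describes is that validation happens before any action is taken, so an invalid plan is rejected rather than partially executed.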

“Capital One has a long history of using data, technology, and analytics to deliver superior financial services products and services for millions of customers,” said Prem Natarajan, chief scientist and head of enterprise AI at Capital One. “The launch of Chat Concierge is a key milestone in our customer-centered AI journey as we continue to focus on solving some of the most challenging problems in finance with technology.”

Source:: Computer World

Ukrainian drones to evade Russian jamming with new alternative to GPS

By Thomas Macaulay A Ukrainian drone tech firm has unveiled an alternative to GPS navigation. Sine.Engineering built the system to counter Russia’s electronic warfare, which has wreaked havoc on GPS signals. To dodge the interference, Sine invented a satellite-free replacement. The approach is inspired by time-of-flight (ToF) methods, which began tracking aircraft long before the advent of GPS. Unlike GPS, ToF systems don’t rely on satellites. Instead, they measure the time it takes a signal to travel between a transmitter and a target. In Sine’s framework, the calculations come from a communication module for drones. Smaller than a playing card, the module shares signals with a…This story continues at The Next Web
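The time-of-flight idea itself is simple to sketch: distance is signal travel time multiplied by propagation speed, and three such distances from transmitters at known positions pin down a 2D position. Sine’s actual system is proprietary, so the following Python sketch is purely illustrative of the general ToF technique:

```python
# Time-of-flight positioning sketch: convert travel times to distances,
# then trilaterate a position from three known transmitter locations.
C = 299_792_458.0  # speed of light in m/s (radio propagation speed)

def tof_distance(seconds: float) -> float:
    """One-way signal travel time -> distance in metres."""
    return C * seconds

def trilaterate(anchors, distances):
    """Solve a 2D position from three (x, y) transmitters and their
    measured distances by linearising the three circle equations."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    # Subtracting circle 1 from circles 2 and 3 gives two linear equations.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A drone at (30, 40) relative to transmitters at three known positions:
pos = trilaterate([(0, 0), (100, 0), (0, 100)],
                  [50.0, 6500 ** 0.5, 4500 ** 0.5])  # ≈ (30.0, 40.0)
```

Real systems must also handle clock synchronisation between transmitter and receiver, since a nanosecond of timing error corresponds to roughly 30 cm of range error.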

Source:: The Next Web

Google’s latest genAI shift is a reminder to IT leaders — never trust vendor policy

Every enterprise CIO knows they cannot — and should not — ever trust a vendor’s policy position. Whether that’s because a vendor might not strictly adhere to its policies or because it can change them at any time without notice, it doesn’t matter.

Google’s move last week to back away from assurances that it would not help make weapons or engage in surveillance was utterly unsurprising. Companies are motivated by revenue, profits, and market share, and if corporate leaders can improve any of those financial metrics by helping to make weapons of mass destruction — or helping a government poison its people — that’s what can happen.

But enterprise CIOs are the customers — customers with big budgets that give them major clout. If companies want your dollars, they must agree to whatever you have in your RFP and your contract.

Why would these massive vendors agree? Because they fear that one of their competitors will do so if they don’t. That could cost them market share and revenue. 

Suddenly, you have their C-suite’s rapt attention.

As for Google in this case, what was the original language the company felt it needed to avoid? Last year’s statement gave a list of “AI applications we will not pursue.” 

This is part of that list: “Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people. Technologies that gather or use information for surveillance violating internationally accepted norms. Technologies whose purpose contravenes widely accepted principles of international law and human rights.”

Then, in an eerily predictive point, it added: “As our experience in this space deepens, this list may evolve.” 

It did evolve. It got a lot shorter.

If a lot of money can be made doing those things, Google now says, in effect, “Human suffering and death and maiming can be trumped by higher profits and market share. Ethics, morality and humanity don’t keep the lights on, buddy!”

You’ll also notice that the company has bagged its “Don’t be evil” tagline; Google apparently ditched it 10 years ago. Maybe they could update it now to something like this: “Google. Where we never let avoiding evil stand in the way of making a profit.”

I was recently discussing this issue with two executives at Phoenix Technologies, a Swiss cloud provider. They made the argument that enterprise CIOs shouldn’t rely on vendor promises, especially those of large language model (LLM) makers, including promises about how models are trained and used.

“If you are reliant on the model makers and their terms and conditions state that they can service anybody, you have to be willing to deal with the fallout,” said Peter DeMeo, the Phoenix group chief product officer. “You really can’t trust the model makers,” especially when they need revenue from government contracts.

His colleague, Phoenix group CTO Nunez Mencias, applauded Google for removing the restriction, given that it was unlikely it could ever be relied on. The model makers “can always change their policies, their rules,” he said.

But there’s a big difference between being unable to rely on a vendor’s self-stated rules and being powerless to discourage AI use in areas your company might not be comfortable with.

Just remember: Entities out there doing things you don’t like are always going to be able to get generative AI (genAI) services and tools from somebody. You think large terrorist cells can’t use their money to pay somebody to craft LLMs for them? 

Even the most powerful enterprises can’t stop it from happening. But, that may not be the point. Walmart, ExxonMobil, Amazon, Chase, Hilton, Pfizer and Toyota and the rest of those heavy-hitters merely want to pick and choose where their monies are spent. 

Big enterprises can’t stop AI from being used to do things they don’t like, but they can make sure none of it is being funded with their money. 

If they add a clause to every RFP that they will only work with model-makers that agree to not do X, Y, or Z, that will get a lot of attention. The contract would have to be realistic, though. It might say, for instance, “If the model-maker later chooses to accept payments for the above-described prohibited acts, they must reimburse all of the dollars we have already paid and must also give us 18 months notice so that we can replace the vendor with a company that will respect the terms of our contracts.”

From the perspective of Google, along with Microsoft, OpenAI, IBM, AWS and others, the idea is to take enterprise dollars on top of government contracts. If they were to believe that’s suddenly an either/or scenario, they might suddenly reconsider. 

Given that Google has decided that revenue is more important than morality, the answer is not to appeal to their morality. If money is all they care about, speak that language. 

Fortunately for enterprises, there are plenty of large companies willing to handle your genAI needs. Perhaps now is the time to use your buying power to influence who else they work with and limit what they do.

Source:: Computer World

Musk furious as judge shuts down DOGE access to Treasury payment system

The US Treasury Department’s payment servers hold the tax returns, social security data and bank account numbers of every adult citizen of the United States.

They are, one would assume, among the most highly secured servers on earth and yet it seems that all the employees of Elon Musk’s Department of Government Efficiency (DOGE) needed to do to access these systems after January 20 was to walk into Treasury Department offices and demand access to the servers’ credentials.

We learn of these extraordinary if still hazy and unconfirmed events by reading between the lines of a weekend ruling by US District Judge Paul Engelmayer in response to a suit brought by 19 states against the actions of the DOGE team.

In the ruling, Engelmayer blocked access by DOGE staff to the Treasury’s payment servers for the time being and ordered that any data downloaded to date by team members should immediately be deleted.

Allowing DOGE access in its current form violated the Administrative Procedure Act (APA), a statutory requirement, as well as the doctrine of the separation of powers and the Take Care Clause of the US Constitution, he ruled.

Further access for unauthorized DOGE staff risked “irreparable damage,” a legal term for serious consequences that can’t be easily remedied through subsequent legal action.

“That is both because of the risk that the new policy presents of the disclosure of sensitive and confidential information and the heightened risk that the systems in question will be more vulnerable than before to hacking,” the ruling continued.

In short, allowing unauthorized personnel to access these servers without monitoring risked data disclosure, also known as a data breach.

“Utterly insane”

The ruling traces the outline of an unexpected fault line that has appeared since President Trump’s inauguration: how far should Presidential appointees be allowed to go when executing executive orders if that risks breaking existing laws and rules around security?

Engelmayer’s answer, for now at least, is not far at all: only staff within the Treasury with the correct security clearance should be granted access to servers containing sensitive citizen and personal data.

Not surprisingly, as it continues its campaign to refashion and downsize the federal workforce, the White House was derisive of the ruling and the legal suit that precipitated it.

“Grandstanding government efficiency speaks volumes about those who’d rather delay much-needed change with legal shenanigans than work with the Trump Administration of ridding the government of waste, fraud, and abuse,” White House spokesperson Harrison Fields said in a statement released to media outlets.

Musk, meanwhile, took to his personal mouthpiece, X, to condemn at length the financial waste he claimed the DOGE access had uncovered within the system.

 “Yesterday, I was told that there are currently over $100B/year of entitlement payments to individuals with no SSN or even a temporary ID number. If accurate, this is extremely suspicious,” he tweeted. “This is utterly insane and must be addressed immediately.”

The counter-argument to this is that it’s not the intention behind the access that’s at issue so much as the principle that security clearance should still apply to people tasked with investigating alleged waste.

Fact vacuum

As is often the case, the ruling doesn’t reveal the full context of what occurred. According to Michel Chamberland, founder of IT services and consulting company IntegSec, this made it hard to judge how far security was bent for the sake of convenience.

“We do not have exact details of what systems were accessed, what specific data they have access to and what level of access they were provided. I think when we hear people’s social security numbers may have been compromised by the DOGE team, it is complete speculation,” he told Computerworld.

One remedy would be for DOGE to explain the nature of their access more clearly:

“I think the first thing they could do is provide more transparency as to what exactly they access, how they do it and the level of access provided,” said Chamberland.

“We also need to hear about the classification of these systems. Not all systems within a government agency will be highly classified. It is possible DOGE was able to do most or all their work without accessing systems that do require a security clearance,” he said.

However, Chamberland agreed that background checks for staff were essential.

“DOGE sharing this information with the public could go a long way to reduce security concerns.”

This is not the first time Musk’s DOGE has upset people enough to provoke legal action. Two weeks ago, a private class action alleged that his team sent emails to the federal workforce from the Office of Personnel Management (OPM) in a way that broke the E-Government Act of 2002 and was insecure.

Source:: Computer World

Europe boosts military AI as Mistral and Helsing form defence tech alliance

By Thomas Macaulay European tech leaders Helsing and Mistral have formed a pact to build new military AI systems. The partnership brings together two of Europe’s top startups. Helsing, a defence tech firm based in Germany, was valued at €5bn last year. Founded in 2021, the company develops software for weapons, vehicles, and military strategy. Its systems have been deployed in battlefield simulations, fighter jets, and drones in Ukraine. Mistral, meanwhile, is widely considered Europe’s closest competitor to OpenAI. The French startup has also become a favourite of investors, raising €600mn at a valuation of €5.8bn last year. The partners announced their…This story continues at The Next Web

Source:: The Next Web

Research shows AI datasets have human values blind spots

By The Conversation My colleagues and I at Purdue University have uncovered a significant imbalance in the human values embedded in AI systems. The systems were predominantly oriented toward information and utility values and less toward prosocial, well-being and civic values. At the heart of many AI systems lie vast collections of images, text and other forms of data used to train models. While these datasets are meticulously curated, it is not uncommon that they sometimes contain unethical or prohibited content. To ensure AI systems do not use harmful content when responding to users, researchers introduced a method called reinforcement learning from human…This story continues at The Next Web

Source:: The Next Web

Mistral releases its genAI assistant Le Chat for iOS and Android

French AI company Mistral has released several updates to its generative AI assistant Le Chat and made it available on Android and iOS. Mistral describes the tool as a comprehensive genAI assistant for both life and work that can be used to access the latest news, plan daily tasks, keep track of projects, upload and summarize documents, and more.

Le Chat is accessed through a chat-like user interface and, according to Mistral, has the fastest inference model in the world. It is also reported to be significantly better at generating images than OpenAI’s ChatGPT. But Le Chat does not yet have a voice mode.

The AI assistant is available in both a free version and a new paid version that costs $15.49 per month. The paid subscription provides access to the company’s latest AI model, higher user limits, and the ability to opt out of sharing data with Mistral.

Enterprise users now have the option to deploy Le Chat in their own environment with custom models and a customized user interface. That is not yet possible with, for example, ChatGPT Enterprise or Claude Enterprise.

In November, Mistral rolled out a tool to automatically delete offending content.

Source:: Computer World

UK orders Apple to let it access everyone’s encrypted data

In its limited wisdom, the deeply unpopular UK government has decided to break privacy for the entire world, slamming Apple with a top secret order that demands blanket access to personal data. Apple must create a “back door” to enable surveillance, according to The Washington Post. It’s a deeply dangerous, unaccountable, draconian demand that threatens privacy, free expression, and commerce, and will ultimately make no one safer.

What makes this even more insidious is the secrecy around the application of the law. Not only is Apple unable to either confirm or deny that it has been told to create this back door, but the UK Home Office will not do so either. Making this worse, while Apple can appeal the demand, it can only do so in a secret court and must deliver the demanded access even before that appeal is heard.

In other words, the government is demanding access to everybody’s encrypted iCloud backups, you don’t get told the government is doing it, there’s no right of appeal against it and, one more thing — it applies internationally. This would effectively give UK spies access to every iCloud backup that exists globally.

Apple might suspend some UK services

It is thought that Apple could withdraw some of its services from the UK market as a result, as it warned it might when the law was first articulated in 2023.  At that time, it called the measure a “serious, direct threat” to security and privacy. It also warned that the global nature of the regulation meant the company could not obey, even if it wanted to, because doing so would force the firm to break other rules, such as those surrounding data privacy.

“End-to-end encryption is a critical capability that protects the privacy of journalists, human rights activists, and diplomats. It also helps everyday citizens defend themselves from surveillance, identity theft, fraud, and data breaches,” the company said.

Even if Apple does withdraw some of its services from the UK, that may not be enough. That’s because the law demands global access, which means UK security agencies can, with few safeguards, demand access to data from anyone. The Post mentioned Advanced Data Protection on iCloud as one service Apple might stop offering to the market, but the regulation seems to imply that if you are a US citizen, the UK (for some insane reason) can still demand access to your encrypted iCloud data.

Sheer and utter folly

I can’t articulate strongly enough how insanely foolish this is; even the FBI agrees encryption is a good thing.

As I’ve argued forever, and as state-sponsored surveillance attacks such as those by the NSO Group should prove, there really is no such thing as a secure back door. Once any such opening exists, it will proliferate. Apple will be forced to share these keys with governments on a global basis, including less trustworthy or unstable regimes, or those willing to support privatized surveillance-as-a-service firms. 

That means it is only a matter of time before all your information becomes an open book to rogue governments, state-sponsored attackers, criminals, and anyone else with a desire to profit from your digital data. 

That’s a threat to you, to free speech and democracy, and also a massive attack against the privacy and security essential to maintain digital commerce. Far from making people safer, the UK demand threatens everyone. More to the point, if the deep state is smashing down iCloud’s doors, it will be smashing down digital doorways everywhere. “Breaking encryption for one breaks encryption for all,” warns Privacy International.

Draconian, unprecedented, unaccountable, dangerous

Needless to say, those who understand the importance of privacy, encryption, and the internet are furious at the UK government's demand. 

Rebecca Vincent, the interim director of privacy and civil liberties campaign group Big Brother Watch, said: “We are extremely troubled by reports that the UK government has ordered Apple to create a backdoor that would effectively break encryption for millions of users — an unprecedented attack on privacy rights that has no place in any democracy. 

“Big Brother Watch has been ringing alarm bells about the possibility of precisely this scenario since the adoption of the Investigatory Powers Bill in 2016. We all want the government to be able to effectively tackle crime and terrorism, but breaking encryption will not make us safer. Instead, it will erode the fundamental rights and civil liberties of the entire population — and it will not stop with Apple.

“We urge the UK government to immediately rescind this draconian order and cease attempts to employ mass surveillance in lieu of the targeted powers already at their disposal.”

“In doing this, the government [is] attempting to undermine the security of millions of users, which would expose them to higher risks of cybercrime,” said James Baker, platform power programme manager at Open Rights Group. “They are failing in their primary duty to protect British citizens. The government want[s] to be able to access anything and everything, anywhere, any time. Their ambition to undermine basic security is frightening, unaccountable and would make everyone less safe. WhatsApp and other services will be next in their sights.

“They seek to do this in secret, with minimal accountability, and potentially global impacts,” he said. “It is straightforward bullying.”

Index on Censorship warned: “Our message to the UK government: please don’t trade in our privacy under the misguided belief it’ll tackle crime. Encryption is essential to privacy and the right to privacy and free expression go hand-in-hand. They should be protected not eroded.”

“There are plenty of other, better ways to catch those involved in criminal activity than this,” wrote Jemima Steinfeld, CEO of Index on Censorship. “All this will do is make the average person in the UK much less safe online and give a green light to autocratic states to follow suit.”

This must be opposed 

I’m horrified and appalled at the move. I consider it a shameful threat to all forms of digital civil liberty and warn that it will create far more harm than it will resolve. Ultimately, privacy is a human right, not a feature, and the removal of such rights should at least be a matter of public and democratic debate, which it has not been. As it stands, this UK overreach should be opposed not only by civil rights advocates, but by anyone else who uses — or provides — online services of any kind, and certainly by any nation that protects its citizens’ privacy.

The UK must think again or become a digital pariah on the world stage. 


Source:: Computerworld

Ethical AI and climate tech are turning the Netherlands into a European innovation leader


By Victor Dey Long admired for its progressive policies and open economy, the Netherlands is making an aggressive play to become Europe’s next tech powerhouse. By blending AI with sustainability and a strong ethical framework, the country attracted $2.5bn in tech investments in 2024 alone — a 39% surge from the previous year. With a government-backed push for responsible innovation, the Netherlands is positioning itself as the epicentre of Europe’s next tech renaissance.  According to VC firm Atomico, the country has become one of Europe’s fastest-growing tech ecosystems. Europe’s leading stock exchange by market cap, Euronext Amsterdam, has become a cornerstone of the…This story continues at The Next Web

Source:: The Next Web


‘Sorry, I didn’t get that’: AI misunderstands some people’s words more than others


By The Conversation The idea of a humanlike artificial intelligence assistant that you can speak with has been alive in many people’s imaginations since the release of “Her,” Spike Jonze’s 2013 film about a man who falls in love with a Siri-like AI named Samantha. Over the course of the film, the protagonist grapples with the ways in which Samantha, real as she may seem, is not and never will be human. Twelve years on, this is no longer the stuff of science fiction. Generative AI tools like ChatGPT and digital assistants like Apple’s Siri and Amazon’s Alexa help people get driving directions,…This story continues at The Next Web

Source:: The Next Web
