By Pranob Mehrotra The Xiaomi Watch 5 takes smartwatch interaction beyond simple wrist flicks, adding advanced, customizable gestures that make hands-free control more powerful than ever.
The post This WearOS watch pushes the boundaries of gesture control and Apple should take note appeared first on Digital Trends.
Source:: Digital Trends
By Paulo Vargas Motorola’s new moto buds 2 plus feature Bose tuning and AI tools like meeting summaries, starting at €79 or $92.70 at MWC 2026.
The post Motorola’s new earbuds pack Bose sound and AI smarts appeared first on Digital Trends.
Source:: Digital Trends
By Deepti Pathak To mark Holi 2026, OPPO India has introduced festive offers on its Reno Series and Find X9 models….
The post OPPO Celebrates Holi 2026 with Special Upgrade Deals on Reno Series & Find X9 appeared first on Fossbytes.
Source:: Fossbytes
By Varun Mirchandani Claude surpasses ChatGPT in App Store downloads as Pentagon controversy sparks public debate and shifts AI app rankings.
The post Claude just beat ChatGPT on the App Store, and the reason is surprising appeared first on Digital Trends.
Source:: Digital Trends
By Varun Mirchandani The “Cancel ChatGPT” trend is growing after OpenAI’s Pentagon deal sparked backlash over military AI and surveillance concerns.
The post What you should know about the Cancel ChatGPT trend and whether it crossed a red line appeared first on Digital Trends.
Source:: Digital Trends
By Hisan Kidwai After both vivo and OPPO played around with their Pro flagships and made people rethink what…
The post Xiaomi 17 Ultra Launched With 1-Inch LOFIC Camera and 200MP Leica Zoom appeared first on Fossbytes.
Source:: Fossbytes
In recent weeks, AI giant Anthropic has been locked in a high‑stakes confrontation with the Trump administration’s Department of Defense (DoD) over new standard terms the Pentagon wants to impose on AI vendors. Defense Secretary Pete Hegseth had demanded contract language that would give the military “any lawful use” of Anthropic’s models, effectively stripping out the company’s long‑standing limits on certain battlefield and domestic applications.
Lawful, in Hegseth’s mind, means the DoD could do practically whatever it wanted, up to and including domestic mass surveillance and AI-controlled weapons.
If that sounds like the premise for how a war between Terminators and humans might begin, you’re not the only one to think so. Caution, however, is not a word Hegseth seems to know. Anthropic CEO Dario Amodei, by contrast, is well aware of the real-world risks of AI, and not just the ones torn from science-fiction horror movies.
Be that as it may, Hegseth summoned Amodei and demanded that Anthropic’s AI be made available for any use he wanted; otherwise, he said, he’d cancel the company’s existing $200 million contract and blacklist it from any further AI pacts. Hegseth gave Anthropic until 5 p.m. yesterday to bend the knee.
Amodei didn’t bend.
He publicly stated the company would rather walk away from work with the DoD than drop contractual safeguards meant to keep its AI from being used for mass surveillance of Americans or for fully autonomous weapons.
It’s not that he objects to using AI to defend the US. Amodei favors that. But, “using these systems for mass domestic surveillance is incompatible with democratic values,” he said. “AI-driven mass surveillance presents serious, novel risks to our fundamental liberties.”
In addition, “frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of [Defense] on R&D to improve the reliability of these systems, but they have not accepted this offer.”
Oh, and by the way, Amodei said those use cases “have never been included in our contracts with the Department of [Defense], and we believe they should not be included now.”
The Pentagon kept the pressure on, with Hegseth describing his stance as “my way or the highway,” and told Anthropic to pitch its “final offer” yesterday. Still, Anthropic rejected the DoD proposal, saying it “cannot, in good conscience,” agree to these overbroad terms.
It’s not, by the way, that Anthropic is some woke, liberal company, as it’s now being painted in some pro-Trump circles. Far from it! As the National Review pointed out, “Amodei is just about the opposite of a dove when it comes to military applications of AI.” For example, Anthropic’s Claude was used by the Trump administration to capture former Venezuelan President Nicolás Maduro in January.
Anthropic’s stance against using AI for domestic surveillance and self-guided weapon systems is less about political ideology and more about a rational realization of the dangers of trusting early-stage, unfettered AI.
Civil liberties groups, including the Electronic Frontier Foundation (EFF), have urged Anthropic to hold the line. They’re casting the Pentagon’s push as an attempt to bully tech firms into building tools for bulk spying and automated warfare. Within Anthropic, employees have posted public messages backing leadership’s stance. They describe the showdown as a visible test of the company’s founding commitment to steer frontier AI away from the most destabilizing military uses.
These workers are not alone in supporting Anthropic’s stance. Alphabet, Amazon, and Microsoft employees announced they were behind Anthropic. Simultaneously, hundreds of Google and OpenAI employees signed an open letter calling on their companies to maintain Anthropic’s red lines against mass surveillance and fully automated weaponry. They said they “hope our leaders will stand together” to reject the current Pentagon terms.
Donald Trump, on the other hand, late yesterday threw a fit. “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY. Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology.”
Government agencies now have six months to transition to alternative tools.
Some on the political right backed Anthropic’s position. Retired General Jack Shanahan, for instance, who was at the center of an earlier military-vs.-AI conflict, the clash between Project Maven and Google, did not take Trump’s side. He wrote: “Despite the hype, frontier models are not ready for prime time in national security settings. Over-reliance on them at this stage is a recipe for catastrophe. Mass surveillance of US citizens? No thanks.”
None of this stopped other AI companies from flirting with the Defense Department. In an internal memo, OpenAI CEO Sam Altman wrote: “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.”
He went on to say OpenAI was still open to making “a deal with the DoW that allows our models to be deployed in classified environments.” That sounded like classic waffling to me, and sure enough, last night OpenAI agreed to work with the Defense Department.
Let’s face it: OpenAI has a bottomless need for revenue to cover its endless capital expenses, so the execs were willing to make a deal with the devil. (Yeah, yeah, I know Altman talked about guardrails and protections. One word for you: hallucinations.)
Sadly, if OpenAI hadn’t made that deal, someone else surely would have. So, if in 2028, AI-driven autonomous drones drop bombs on suspected illegal foreigners’ homes in Minneapolis or anywhere else in the world, we’ll know who to blame — much good that will do us then.
This insane adoption of out-of-control AI for military purposes must be stopped now lest the Terminator wars become fact rather than science fiction.
Source:: Computer World
By Deepti Pathak Following the company’s expansion of its X-series lineup in the past few months, Vivo is now…
The post Vivo X300 FE Moves Closer to Debut; X300 Ultra Could Get Dual Teleconverter Kit appeared first on Fossbytes.
Source:: Fossbytes
The Trump administration on Friday moved to ban the use of products from artificial intelligence company Anthropic by federal agencies, escalating a high-stakes clash over whether private AI makers can limit how the US military uses their systems. Just hours later, Anthropic rival OpenAI’s CEO, Sam Altman, announced that his company had reached a deal to supply the Pentagon with its technology, ostensibly under the same terms that the military had rejected for Anthropic.
Calling Anthropic “Leftwing nut jobs,” President Donald Trump said in a Truth Social post that he was directing “EVERY Federal Agency” to stop using Anthropic’s technology immediately. At the same time, the Pentagon prepared to designate the company a “supply chain risk,” a label more commonly associated with foreign adversaries’ tech products, such as telecom gear made by China’s Huawei.
The decision follows an unusually public dispute between Anthropic and Defense Secretary Pete Hegseth over what the Pentagon called an “all lawful purposes” requirement, which means that once the military licenses an AI model, it must be free to deploy it for any lawful mission without being constrained by vendor-imposed safety policies.
On X, Defense Secretary Pete Hegseth echoed Trump’s criticism, saying “Cloaked in the sanctimonious rhetoric of ‘effective altruism,’ [Anthropic and CEO Dario Amodei] have attempted to strong-arm the United States military into submission – a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.” He added, “Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.”
In a late-night statement, Anthropic responded to the Pentagon, saying, “We have not yet received direct communication from the Department of War or the White House on the status of our negotiations.” It also said it believes the designation of supply chain risk “would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.”
A six-month clock and a scramble to replace Claude
Under the plan, according to Axios, the Defense Department would sever a contract, worth up to $200 million, with Anthropic, and require defense contractors and other vendors to certify they are not using Anthropic’s Claude model in work tied to the Pentagon. The administration is allowing a six-month window to give agencies and contractors time to transition to alternatives.
That transition could be particularly disruptive, because Claude has been used in the military’s classified systems, which support some of the Pentagon’s most sensitive intelligence work, weapons development, and operational planning.
Defense officials have described Claude as highly capable, and acknowledged that disentangling it from existing workflows would be difficult.
What the administration says it is fighting over
Anthropic argues that certain uses, especially mass domestic surveillance and fully autonomous weapons, should remain out of bounds.
CEO Dario Amodei said in an impassioned essay that the company cannot remove those guardrails “in good conscience,” warning that current AI systems are not reliable enough for fully autonomous lethal decision-making, and that large-scale surveillance carries significant risks of abuse.
The Pentagon argues that the military already operates under its own rules and oversight, and cannot have mission decisions constrained by a vendor’s terms of service, particularly in gray areas where definitions of “surveillance” and “autonomy” can be contested.
What it could mean for US national security
In the near term, the administration’s move forces the Pentagon to manage a delicate transition: removing Anthropic’s model from classified environments while maintaining continuity for intelligence analysis and planning tasks that had begun to incorporate generative AI.
The longer-term implications are broader. The ban signals that access to the federal market, particularly defense, may depend on accepting “all lawful use” terms, potentially reducing the leverage of AI companies that try to impose hard red lines on certain national security applications.
It also raises practical questions for AI companies as government vendors. If the government pushes one leading AI provider out of sensitive systems, agencies and contractors may consolidate around a smaller number of alternatives, increasing dependence on whichever firms remain willing and able to operate in classified environments.
These dislocations in critical military infrastructure could further pose a national security threat, some argue. US Sen. Mark R. Warner (D-VA), vice chairman of the Senate Intelligence Committee, said the efforts by Trump and Secretary Hegseth pose a national security risk. “The president’s directive to halt the use of a leading American AI company across the federal government, combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations.”
Competitors could move in: Grok, OpenAI, and Google
The decision could reshape the competitive landscape.
Elon Musk’s xAI has already signed an agreement to bring its Grok model into classified military systems, in a development that positioned xAI as a potential replacement if Anthropic’s relationship with the Pentagon collapsed.
However, significant concerns about Grok’s safety and reliability have surfaced within parts of the federal government, even as the Pentagon approved it for classified settings, an early indication that “replacement” won’t be a simple matter of switching one model for another.
Meanwhile, the Pentagon has been in discussions with OpenAI and Google about expanding their models’ availability from unclassified systems into more sensitive environments, Axios reported. The discussions with OpenAI apparently bore fruit: less than seven hours after Trump’s Truth Social post, OpenAI’s Altman posted on Twitter, “we reached an agreement with the Department of War to deploy our models in their classified network.”
In an apparent about-face, however, the Pentagon appeared to accept from OpenAI the same terms it rejected for Anthropic. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
OpenAI CEO Sam Altman has also sought to position his company as aligned with Anthropic’s core ethical objections, while still pursuing Pentagon business. Altman had said OpenAI shares “red lines” against mass surveillance of Americans and weapons that can fire without human oversight, even as it explores a path to work with the Defense Department.
Political and industry backlash begins to surface
Even among competitors, the Anthropic fight produced unusual sympathy.
Hundreds of employees at Google and OpenAI backed Anthropic in a petition, underscoring internal tensions across the AI industry over military applications. One factor that could derail the ban is unified rejection from across the AI sector.
Peter Madsen, former professor of ethics and social responsibility at Carnegie Mellon University and executive director of the Center for the Advancement of Applied Ethics and Political Philosophy, said in an interview, “Every other AI company should commit to the same ideals as Anthropic so that Trump will have to use an ethical AI firm, not one that will cower to his whims.”
Anthropic has said it would cooperate with a transition to avoid disruption to ongoing missions, though it has not said whether it will challenge the “supply chain risk” designation in court.
What happens next
The administration’s decision sets up several immediate test cases.
First, agencies and contractors must determine how deeply Anthropic’s tools are embedded in their operations and how quickly they can migrate without degrading performance or security.
Second, rivals will face their own balancing act: how to satisfy Pentagon demands for “all lawful use,” or in the case of OpenAI, walk the fine line of its safety principles, while managing internal and external scrutiny over surveillance, autonomy, and the risk of AI systems behaving unpredictably in high-stakes settings.
Finally, the ban raises a fundamental policy question that goes beyond Anthropic: in the race to deploy frontier AI for national security, who sets the boundaries? The government that needs operational flexibility, or the private companies that build and control access to the technology?
This article has been updated from its original version to reflect the late night announcement by OpenAI that it had reached a deal with the Pentagon.
This article originally appeared on CIO.com.
Source:: Computer World
Global PC and smartphone sales are expected to fall by more than 10% this year, according to analysts, as hyperscaler investment in AI data centers fuels a memory shortage.
PC shipments will fall 10.4% during 2026 compared to 2025, as the constrained memory supply leads to higher prices, according to Gartner. IDC predicts a slightly larger 11.3% decline over the same period.
The smartphone market will also see significant year-on-year shipment declines in 2026 — down 8.4% according to Gartner, or 12.9%, according to IDC.
“The current situation is now more negative than even our most pessimistic scenarios suggested just a few months ago,” IDC said in a blog post Thursday. The analyst firm in December had forecast a worst-case 8.9% drop in PC shipments.
“The speed at which the memory pricing has increased has shocked everybody,” said Ranjit Atwal, research director at Gartner, with an expected 130% year-on-year rise in 2026. “This is a demand-side issue. The demand that’s available is all going to hyperscalers; the PC guys and the smartphone guys are getting squeezed.”
For PC vendors, increased memory costs will account for 23% of the total bill-of-materials cost this year, according to Gartner, up from 16% in 2025. This will feed through to PC prices, which are expected to rise by 17% in 2026, the researcher predicted.
Large PC makers are more equipped to weather the storm, analysts said, but will still be affected. HP said in its first quarter earnings call that memory now accounts for 35% of the costs to build a PC, up from between 15% and 18% the previous quarter.
The situation is more dire for smaller vendors and those already operating on wafer-thin margins. “Consolidation isn’t off the map here,” said Atwal. “It’s survival of the fittest as much as anything.”
For enterprise buyers, higher prices are likely to lengthen PC refresh cycles, increasing by 15% during 2026, according to Gartner.
Enterprise buyers are now negotiating with vendors in a fast-changing market. “They’re trying to work out what is a good price at this moment,” said Atwal. “Vendors aren’t guaranteeing prices for long now, they’re saying this is the price and it’s available for two or three weeks.”
Budget constraints mean that some PC purchases will be deferred, said Atwal. For businesses that moved to Windows 11 on existing devices last year, that could be problematic. “That then causes issues as…Microsoft will no doubt be bringing new Windows 11 capabilities, and you may not have the hardware capabilities [to] run some of that.”
Businesses will continue to invest in AI PCs, said Atwal, but at a slower rate, and are likely to purchase devices with reduced memory.
The disruption is expected to continue for the foreseeable future. “Price is not only increasing in the short term…, it’s going to remain high almost through to the end of 2027,” said Atwal, pointing to structural changes in the market. “We’re advising to buy now, or wait [until prices stabilize again], because whatever you’re getting at the moment is going to be the best price.”
Source:: Computer World
Perplexity says its new Perplexity Computer service can perform complex, multi-step tasks on behalf of human users, by organizing the tasks that are needed and creating the software agents required to fulfill the process.
Users begin by describing their desired outcome, the company said, then, “Perplexity Computer breaks it into tasks and subtasks, creating sub-agents for execution. The sub-agents might do web research, document generation, data processing, or API calls to your connected services. A document is drafted by one agent while another gathers the data it needs.”
Perplexity Computer draws on a variety of AI resources for different tasks. “Models are specializing. Each frontier model excels at different kinds of work, so a full workflow must have access to them all and deploy them intelligently,” the company said. “Perplexity Computer runs Opus 4.6 for its core reasoning engine and orchestrates sub-agents with the best models for specific tasks: Gemini for deep research (creating sub-agents), Nano Banana for images, Veo 3.1 for video, Grok for speed in lightweight tasks, and ChatGPT 5.2 for long-context recall and wide search.”
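The planner/sub-agent pattern the company describes, a core reasoning model decomposing a goal into typed subtasks and routing each to a specialist model, can be sketched roughly as follows. This is a hypothetical illustration under stated assumptions: the model names mirror the article, but the function names, routing table, and fixed two-step plan are inventions for illustration, not Perplexity’s actual API or planning logic.

```python
# Hypothetical sketch of the orchestration pattern described in the article.
# Routing table: subtask type -> specialist model (names from the article).
SPECIALISTS = {
    "research": "gemini",           # deep research
    "image": "nano-banana",         # image generation
    "video": "veo-3.1",             # video generation
    "lightweight": "grok",          # fast, simple tasks
    "long_context": "chatgpt-5.2",  # long-context recall and wide search
}

def plan(goal: str) -> list[dict]:
    """In the real product, a core reasoning model (Opus 4.6, per the
    article) would decompose the goal; here we fake a fixed plan."""
    return [
        {"kind": "research", "task": f"gather data for: {goal}"},
        {"kind": "long_context", "task": f"draft document for: {goal}"},
    ]

def run_subtask(subtask: dict) -> str:
    """Dispatch one subtask to its specialist; a real system would call
    the model's API here instead of returning a placeholder string."""
    model = SPECIALISTS[subtask["kind"]]
    return f"[{model}] completed: {subtask['task']}"

def perplexity_computer(goal: str) -> list[str]:
    return [run_subtask(s) for s in plan(goal)]

results = perplexity_computer("quarterly sales report")
for r in results:
    print(r)
```

The point of the sketch is the division of labor: planning happens once in a strong reasoning model, while execution fans out to whichever model is cheapest or best for each subtask type.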
Perplexity Computer is available now for subscribers to the $200/month Perplexity Max plan, and will soon be available to users on the $325/month Enterprise Max plan.
Source:: Computer World
By Pranob Mehrotra Honor teases its ultra-thin Silicon-Carbon Blade Battery, said to offer more power than the 6,600mAh pack in the upcoming Magic V6.
The post Honor teases its next-gen silicon-carbon battery that’s as thin as a playing card appeared first on Digital Trends.
Source:: Digital Trends
By Pranob Mehrotra Ultrahuman has announced its latest flagship smart ring, the Ring Pro, and it aims to solve one of the biggest pain points of using a smart ring.
The post Ultrahuman’s new Ring Pro aims to end your smart ring battery worries appeared first on Digital Trends.
Source:: Digital Trends
By Paulo Vargas University of Washington researchers built a smartphone app that tracks fetal heart rate as accurately as clinic tools using only the phone’s speaker and mic, though it’s not ready for release yet.
The post This app turns your smartphone into a fetal heart rate monitor appeared first on Digital Trends.
Source:: Digital Trends
By Deepti Pathak The upcoming BGMI 4.3 update will introduce a new racing-style feature to classic matches, adding an action-packed…
The post BGMI 4.3 Update Brings Drag-Style Racing Checkpoints in Classic Mode appeared first on Fossbytes.
Source:: Fossbytes
In an industry first that reflects the work Apple has done on mobile device security since the first iPhone arrived almost 20 years ago, the North Atlantic Treaty Organization (NATO) says iPhones and iPads running iOS 26 are secure enough to handle classified information in NATO-restricted environments, pretty much out of the box.
That’s going to mean a great deal to military planners at the organization, who will now be much happier to use Apple’s devices to handle classified information up to NATO restricted level without any additional software or settings. This means the iPhone and iPad are the first (and only) consumer devices to have met the agency’s compliance standards.
It also means that, in general terms, the iPhone in our pocket is now seen as being sufficiently secure to handle some of the most classified information you can get — and if you regularly use your device to handle anything of greater importance, you can use Lockdown Mode.
NATO’s approval extends to handling that kind of information using standard Apple apps, including Mail, Calendar, and Contacts data.
What Apple said
“This achievement recognizes that Apple has transformed how security is traditionally delivered,” said Ivan Krstić, Apple’s vice president of security engineering and architecture. “Prior to iPhone, secure devices were only available to sophisticated government and enterprise organizations after a massive investment in bespoke security solutions.
“Instead, Apple has built the most secure devices in the world for all its users, and those same protections are now uniquely certified under assurance requirements for NATO nations — unlike any other device in the industry.”
There are two caveats to recognize. The first is that NATO does require that devices handling this sort of data in these environments be managed devices implementing relevant policy controls on use; the second is that you absolutely need to have your devices protected by passcodes and/or biometric (Face/Touch) ID.
The ramifications for enterprise users are significant. The approval implies that, so long as you have effective policies in place (so no one uses an iPhone to take pictures of confidential blueprints they then share with a competitor, for example), the device you get out of the box is likely secure.
Security is in Apple’s DNA
The NATO approval builds on an earlier security success for the company: the devices were approved to handle classified German government data on hardware using native iOS and iPadOS security measures after an extensive evaluation by the Federal Office for Information Security (the Bundesamt für Sicherheit in der Informationstechnik, or BSI).
As part of that effort, BSI conducted a comprehensive series of assessments and tests, including deep security analysis, to make sure the security capabilities Apple had already put in place were up to the task. This also led to the approval of these systems by NATO’s 32 member states.
“Secure digital transformation is only successful if information security is considered from the beginning in the development of mobile products,” said Claudia Plattner, BSI’s president. “Expanding on BSI’s rigorous audit of iOS and iPadOS platform and device security for use in classified German information environments, we are pleased to confirm the compliance under NATO nations’ assurance requirements.”
Security is, of course, in Apple’s DNA, which is why the company designs it into the core of its products. As proof, Apple can point to years of work on security, during which it has been led by the idea that security protections should be focused on users, deeply integrated, and available across its ecosystem.
That work led, for example, to the invention of the Secure Enclave on Apple processors, which does much to ensure device security. (That’s also why everyone using one of these devices should ensure they use a super-tough password and enable biometric ID.) In truth, Apple device security rests on a complex web of layered, integrated protections, from Secure Boot to Memory Integrity Enforcement (now also on M5 Macs) and beyond.
In more general terms, this means that any user, even those who aren’t relying on managed devices and don’t work for NATO, can expect high security for the data on their device. That’s the case as long as they only use apps distributed by the App Store, refuse to use random configuration profiles downloaded for whatever reason from the ‘net, have device protection enabled, and use a tough-to-guess passcode.
More details about Apple’s security protections are available in the Apple Platform Security guide.
Please follow me on Twitter, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe. Also, now on Mastodon.
Source:: Computer World
ServiceNow plans to unleash the first member of its Autonomous Workforce, the Level 1 Service Desk AI specialist, next quarter.
The agent will autonomously diagnose and resolve common IT support requests such as password resets, provisioning of software access, and network troubleshooting. It will base its actions on information from enterprise knowledge bases, historical incident data, and defined workflows, and will be available 24/7. That frees humans to work on more strategic tasks while the agent executes mundane ones with the scope, authority, and governance required for enterprise work, the company said.
ServiceNow is already using the agent internally, and claims that it is handling more than 90% of employee requests, and is almost twice as fast as human agents in performing these tasks, while still maintaining the necessary business context and governance required by an enterprise.
ServiceNow AI specialists like the Level 1 Service Desk agent are designed to work alongside humans, operating within a clearly defined scope governed by the same permissions that a human agent in that role would have.
“AI specialists, by default, cannot exceed their authority nor self-escalate permissions in memory based on the outcomes of reasoning that occurred during the first step of the AI-powered decision and execution flow,” said John Aisien, SVP of central product management at ServiceNow, during a media briefing. “Instead, these AI specialists ground decisions in live enterprise data, drawing in real-time information about assets, access, ownership, real-time permissions, and previous resolution patterns through our enterprise data foundation and our context graph.”
By combining probabilistic intelligence with deterministic workflow orchestration, ServiceNow said, the AI specialists can interpret requests, use business context to determine the right action to take, and execute that action while being overseen by ServiceNow’s AI Control Tower. They then notify the affected employee and update the knowledge base. And if they can’t resolve the issue, they pass it on to a Level 2 or Level 3 human agent for further investigation.
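The resolve-or-escalate loop described above, interpret the request, act within a defined scope, notify and record the outcome, and hand off to a human when the request falls outside that scope, can be sketched in a few lines. Every name here (`KNOWN_RESOLUTIONS`, `handle_request`, the result fields) is a hypothetical illustration of the pattern, not ServiceNow’s actual API.

```python
# Hypothetical sketch of the Level 1 agent flow the article describes.
# The agent's "scope" is modeled as a fixed table of requests it may
# resolve; anything else is escalated rather than guessed at.
KNOWN_RESOLUTIONS = {
    "password reset": "reset link sent",
    "software access": "license provisioned",
}

def handle_request(request: str) -> dict:
    action = KNOWN_RESOLUTIONS.get(request)
    if action is None:
        # Outside the agent's defined authority: hand off to a human.
        return {"status": "escalated", "to": "level-2 human agent"}
    # Within scope: execute, notify the employee, record the outcome.
    return {
        "status": "resolved",
        "action": action,
        "employee_notified": True,
        "knowledge_base_updated": True,
    }

print(handle_request("password reset"))
print(handle_request("VPN certificate renewal"))
```

The design choice worth noting is that the escalation path is deterministic: the probabilistic model may interpret the request, but what the agent is permitted to do is bounded by an explicit, auditable scope.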
This is different from the historical approach. For the last two years, said Greyhound Research Chief Analyst Sanchit Vir Gogia, most vendors have competed on interface intelligence, with copilots summarizing, suggesting, and predicting. But, he said, “that phase is now saturated. What enterprises are evaluating in 2026 is whether AI can operate as a governed execution layer inside production workflows. Autonomous Workforce signals that ServiceNow understands this shift.”
This, he said, is architecturally meaningful: “AI … is being structured as a delegated participant in defined job roles. That changes accountability,” he said. “This is why ServiceNow’s emphasis on deterministic workflow orchestration is strategically aligned with enterprise demand. Models are probabilistic by design. Enterprises require outcomes that are predictable, auditable, and bound by policy.”
ServiceNow, however, didn’t say who would be accountable if one of its AI specialists went off the rails.
EmployeeWorks
ServiceNow also announced EmployeeWorks, available today, which it calls “a conversational front door to the enterprise.” It works as a personal assistant, pulling together conversational AI and enterprise search from Moveworks, which ServiceNow recently acquired, and from ServiceNow’s own unified portal and autonomous workflows, said Bhavin Shah, founding CEO of Moveworks and now general manager for Moveworks and AI at ServiceNow.
“Employees don’t need to know what agent to invoke, or where to go, or ask ‘should I use this system or that system?’” he said. “It just works.” The service supports protocols such as MCP and A2A to enable a “secure, scalable coordination between agents and business systems,” he said.
EmployeeWorks understands organizational structure, approvals, and authorization so it can execute tasks that require multi-system coordination, ServiceNow said, yet it can still maintain governance and audit trails. It can, for example, pull information from a document in SharePoint, then reference a Slack thread and pull together the information to create an action, or it can route and handle approvals, orchestrate workflows, or update systems, all while following enterprise policies.
Shah said EmployeeWorks is vendor-agnostic, can answer employee questions without them needing to switch to a different tool, and provides out-of-the-box integration and enterprise search.
Reservations about automations
Analysts approve of ServiceNow’s overall direction but have reservations about the announcements.
Moveworks’ built-in governance mechanisms sound “amazing,” said Info-Tech Research Group Advisory Fellow Scott Bickley, but implementing EmployeeWorks will require considerable groundwork: documenting workflows, updating knowledge bases, cleaning data, and defining approval paths, with limits and exception handling in place to cover edge cases.
Gogia agreed. “ServiceNow is moving in the right direction because it is anchoring AI inside workflow control,” he said. “However, correctness of direction does not guarantee maturity of execution. The credibility of this strategy will be measured in regulated, exception-heavy, cross-system environments, not in idealized service desk queues.”
Moor Insights & Strategy Principal Analyst Melody Brue said, “The concern is that AI agents could become a new layer that routes around many of the apps people use today. ServiceNow aims to sit above that, coordinating agents and workflows across systems rather than just being another tool they might end up replacing.”
It’s no longer enough for AI to drive incremental efficiency, she said. Now, “it must help unlock value trapped in enterprise data and workflows. By tying AI into systems of record and orchestrated workflows, ServiceNow aims to move from static reports to agents that act on insights.”
Gogia takes it as a given that enterprises will adopt autonomous AI. The key question, he said, is whether they can govern it without destabilizing operational trust.
Another concern, said Bickley, is how enterprises will pay for it all. SaaS vendors each charge for AI services using their own flavor of usage-based “AI credits,” but it’s difficult to model and predict credit consumption accurately enough for reliable budget forecasting, he said.
“There needs to be a clear path for legacy seat subscriptions to be migrated into AI credits,” Bickley said. “CFOs will not tolerate a variable pricing model that destroys budget predictability, and this pain point seems to go unaddressed by ServiceNow, and for that matter, the broader SaaS ecosystem as they double down on their aggressive AI launch initiatives.”
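Bickley’s budgeting concern can be made concrete with a toy comparison: per-seat pricing yields one exact annual figure, while usage-based credits yield a spend range that widens with uncertainty in task volume. All numbers and both helper functions below are invented for illustration and do not reflect any vendor’s actual pricing.

```python
def seat_cost(seats: int, price_per_seat: float) -> float:
    """Legacy per-seat model: deterministic annual cost."""
    return seats * price_per_seat * 12

def forecast_credit_spend(monthly_tasks: int, credits_per_task: float,
                          credit_price: float, variance: float) -> tuple[float, float]:
    """Credit model: best/worst-case annual spend given a
    +/- `variance` uncertainty in task volume (0.3 = 30%)."""
    base = monthly_tasks * credits_per_task * credit_price * 12
    return base * (1 - variance), base * (1 + variance)

fixed = seat_cost(seats=500, price_per_seat=30)             # a single exact number
low, high = forecast_credit_spend(monthly_tasks=50_000,
                                  credits_per_task=0.8,
                                  credit_price=0.10,
                                  variance=0.3)             # a range, not a number
print(fixed, low, high)
```

A CFO can budget against `fixed` directly; the credit model only offers a band whose width depends on a consumption estimate that, as Bickley notes, is hard to get right in advance.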
This article first appeared on CIO.com.
Source:: Computer World
By Hisan Kidwai ASUS gaming laptops have been the cream of the crop for some time now, as evidenced…
The post ASUS 2026 Creator Series Launched: ProArt GoPro Edition, ROG Flow Z13-KJP, TUF A14 appeared first on Fossbytes.
Source:: Fossbytes
By Deepti Pathak Instagram is widely used for sharing photos, videos, and Reels. Many people check others’ profiles out…
The post Can You See Who Views Your Instagram Profile? appeared first on Fossbytes.
Source:: Fossbytes
The US government has ordered its diplomats to actively oppose other countries’ attempts to introduce so-called data sovereignty laws that restrict how and where foreign technology companies can store and handle citizens’ data, according to Reuters.
In an internal memo from Secretary of State Marco Rubio, the US describes such rules as a threat to free data flows, AI development, and cloud services. The Trump Administration believes that data localization could increase costs, create cybersecurity risks, and give governments greater control over information.
At the same time, support for data sovereignty is growing, especially in Europe, where there are concerns about privacy, surveillance, and US dominance in AI and tech. The EU’s GDPR is mentioned in the document as an example of rules that the US considers unnecessarily restrictive.
Diplomats have now been tasked with monitoring and influencing international proposals that restrict cross-border data flows, as well as promoting alternative frameworks that support the free transfer of data between countries.
Source:: Computer World