By Deepti Pathak If you’re serious about trading or holding high units in Anime Adventures, you need to have…
The post Anime Adventures Value List (May 2025) appeared first on Fossbytes.
Source:: Fossbytes
By Nick Godt A weekly recap of the revolutionary technology powering, connecting, and now driving next-gen electric vehicles.
Source:: Digital Trends
By Megan Carnegie Sophie Rucker had been living and working in London for five years when a trip to a yoga training school in Bali presented her with an alternative to the rat race. Despite enjoying life in London, witnessing digital nomads balance work with sun, sea, and relaxed vibes in the Indonesian island province prompted her to pursue more freelance work. At the start of 2020, having set herself up as a communications strategist for NGOs and social impact organisations, Sophie quit her permanent role and moved to Bali. Despite the uncertainty of the progressing pandemic, she found the space she needed…This story continues at The Next Web
Source:: The Next Web
Windows, as all but the most besotted Microsoft fans know, has historically been a security disaster. Seriously, what other program has a dedicated day each month to reveal its latest security holes?
But now, Windows Recall, the AI-powered “feature” that continuously takes snapshots of your screen to create a searchable timeline of everything you do, has arrived for Copilot+ PCs running Windows 11 version 24H2 and newer.
After a year of controversy and multiple delays prompted by widespread privacy and security concerns, Microsoft has significantly changed Recall’s architecture. The feature is now opt-in, requires Windows Hello biometric authentication, encrypts all snapshots locally, filters out sensitive data such as credit card numbers, and allows users to filter out specific apps or websites from being captured.
I am so unimpressed. A few days ago, in the latest Patch Tuesday release, Microsoft revealed five — count ’em, five! — zero-day security holes in Windows alone. Do you expect me to trust Recall with a track record like this?
Besides, even if I don’t enable the feature, what if our beloved federal government decides that for our protection, it would be better if Microsoft turned on Recall for some users? After all, it’s almost impossible to run Windows these days without having a Microsoft ID, making it easy to pick and choose who gets what “update.”
Other people feel the same way. Recall remains a lightning rod for criticism. Privacy advocates and security experts continue to warn that the very nature of Recall, which captures and stores everything displayed on a user’s screen every few seconds, is inherently too risky. Even if you don’t use the feature yourself, what about all the people you communicate with who might have Recall turned on? How could you even know?
A friend at the University of Pennsylvania told me that the school has examined Microsoft Recall and found that it “introduces substantial and unacceptable security, legality, and privacy challenges.” Sounds about right to me.
Amusingly enough, Kaspersky, the Russian security company that has its own security issues, also states that you should avoid Recall. Why? Well, yes, when you first activate Recall, you are required to use biometric authentication. After that, your PIN will do nicely. Oh, and its automatic filtering of sensitive data is unreliable. Sure, it will stop taking snapshots when you’re in private mode on Chrome or Edge. Vivaldi? Not so much.
And as Kaspersky points out, if you use videoconferencing with automatic transcription enabled, Recall will save a complete call transcript detailing who said what. Oh boy!
Signal, the popular secure messaging program (well, secure when you use it correctly — unlike, say, the US Secretary of Defense), wants nothing to do with this. It has introduced a new “Screen security” setting in its Windows desktop app, specifically designed to protect its users from Recall.
Enabled by default on Windows 11, this feature uses a Digital Rights Management (DRM) flag to stop any application, including Windows Recall, from capturing screenshots of Signal chats. When Recall or other screenshot tools try to capture Signal’s window, they get a blank image instead.
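Signal’s post doesn’t name the exact mechanism, but the standard way to set this kind of capture-blocking flag on Windows is the SetWindowDisplayAffinity API with the WDA_EXCLUDEFROMCAPTURE flag. Here is a minimal sketch in Python via ctypes (the helper name is mine; Signal’s Electron-based desktop app would call the native API directly):

```python
import ctypes
import sys

# Display-affinity constants from the Windows winuser.h header.
WDA_NONE = 0x0                  # window contents visible to capture (default)
WDA_MONITOR = 0x1               # window captured as a black rectangle
WDA_EXCLUDEFROMCAPTURE = 0x11   # window excluded entirely; capture sees blank (Windows 10 2004+)

def exclude_from_capture(hwnd: int) -> bool:
    """Flag a window so screenshot APIs (including Recall) get a blank image."""
    if sys.platform != "win32":
        raise OSError("SetWindowDisplayAffinity is a Windows-only API")
    user32 = ctypes.windll.user32
    return bool(user32.SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE))
```

Any application that owns its window handle can set this flag, which is why Signal could ship the protection unilaterally without waiting on Microsoft.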
Why? In a blog post, Signal explained:
“Although Microsoft made several adjustments over the past twelve months in response to critical feedback, the revamped version of Recall still places any content that’s displayed within privacy-preserving apps like Signal at risk. As a result, we are enabling an extra layer of protection by default on Windows 11 in order to help maintain the security of Signal Desktop on that platform, even though it introduces some usability trade-offs. Microsoft has simply given us no other option.”
Actually, you do have another option: Desktop Linux. I said it ages ago, and I’ll say it again now. If you really care about security on your desktop, you want Linux.
Source:: Computer World
By Thomas Macaulay The collapse of Builder.ai exposes the growing threat of “FOMO investing,” according to an expert in tech growth intelligence. Builder had become one of Britain’s best-funded startups, but is now filing for bankruptcy due to financial problems. The insolvency comes after enormous sums were invested into the business. Big-name backers including Microsoft and Qatar’s sovereign wealth fund had poured a total of over $500mn into the startup, which aimed to simplify software development with AI. The funding gave Builder a coveted unicorn status, with a valuation exceeding $1.3bn. But the eye-watering sums couldn’t keep the business afloat. Builder blamed the…This story continues at The Next Web
Source:: The Next Web
By Deepti Pathak HP has introduced the OmniStudio X All-in-One (AIO) PC in India. It is a stylish 32-inch…
The post HP Launches OmniStudio X All-in-One (AIO) in India appeared first on Fossbytes.
Source:: Fossbytes
Google (Nasdaq:GOOG) is implementing AI-driven enhancements to its software and collaboration tools that could translate to big productivity gains.
Google CEO Sundar Pichai started his keynote at the Google I/O developer conference this week with one of those features: real-time language translation in Google Meet, which had until now been a research project.
The feature uses Google’s AI technology to translate speech from one language to another in near real time, while matching the tone and expressions — such as “hmmm” — during delivery. The technology breaks down language barriers, Pichai said.
A video demonstrated the translation from spoken English to Spanish. A computer-generated voiceover spoke the translation after a one-second delay. Then the other participant’s response was translated from Spanish to English.
“We are even closer to having a natural and free-flowing conversation across languages,” Pichai said.
Translations to English and Spanish are now available in beta for Google AI Pro and Ultra subscribers, with more languages rolling out in the next few weeks.
“Real-time translations will be coming to enterprises later this year,” Pichai said.
Pichai, in the opening moments of his keynote, also mentioned a new product called Google Beam, a 3D video communications platform that transforms 2D video streams into a realistic experience. The product was in development for many years under a research effort called Project Starline.
Behind the scenes, an array of six cameras captures participants from different angles.
“With AI we can merge these video streams together and render you on a 3D light-field display with near perfect head tracking down to the millimeter and at 60 frames per second, all in real time,” Pichai said.
The result, Pichai said, was a much more natural and deeply immersive conversational experience.
The first Google Beam devices will be available for early customers later this year. Google is partnering with device maker HP, which will share more information about these devices a few weeks from now.
Traditional videoconferencing reduces many natural social cues that people experience in face-to-face interactions, which is where something like Google Beam fits in, said J.P. Gownder, vice president and principal analyst on Forrester’s Future of Work team.
“Those subtle cues contain a lot more information than people realize, so richer forms of videoconferencing that feel naturalistic could offer benefits,” said Gownder.
Since users don’t have to wear specialized equipment such as VR headgear, it will be that much more accessible.
“It’s probably a longer-term play, but if the experience is better, over time it could get traction,” Gownder said.
Google also shed more light on how it is implementing Workspace Flows, a feature that was introduced at last month’s Google Cloud Next as a way to automate work across Google Workspace apps.
Workspace Flows brings AI agents into the loop to get work done. It can use AI agents called Gems, which specialize in certain tasks such as customer service and work with other AI agents to complete tasks.
An onstage Gems demonstration showed how workers could use AI agents to automate customer service: sharing complaints with other employees via Google Chat, looking up product literature, referencing an internal genAI model for further answers, and automatically sending a possible resolution to customers.
Gems use Google’s Gemini AI model to analyze information, prioritize tasks, and generate feedback.
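Google hasn’t published a public API for this kind of flow, but purely as a conceptual sketch (all names and data here are hypothetical, not the Workspace Flows API), the hand-off pattern in the demo amounts to chaining specialised agents:

```python
# Toy sketch of a multi-agent hand-off like the Gems demo: each "agent" is a
# specialised function, and a coordinator chains their outputs together.

def triage_agent(complaint: str) -> dict:
    """Classify the incoming complaint (stand-in for a Chat-sharing step)."""
    return {"complaint": complaint,
            "priority": "high" if "refund" in complaint else "normal"}

def research_agent(ticket: dict) -> dict:
    """Attach supporting material (stand-in for a product-literature lookup)."""
    ticket["reference"] = "product-manual §4.2"
    return ticket

def resolution_agent(ticket: dict) -> str:
    """Draft the customer-facing reply (stand-in for the genAI answer step)."""
    return f"[{ticket['priority']}] Draft reply citing {ticket['reference']} sent to customer."

def handle(complaint: str) -> str:
    # Coordinator: pass the ticket through each specialised agent in turn.
    return resolution_agent(research_agent(triage_agent(complaint)))

print(handle("I want a refund for my broken blender"))
```

In a real deployment each function would be an LLM-backed agent with its own tools; the point is only that the "team of Gems" model is a pipeline of narrow specialists rather than one monolithic assistant.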
“These are AI experts you can create to solve particular tasks… you can actually have a team of Gems working together to solve these issues for you,” said Farhaz Karmali, Google’s product director for the Workspace ecosystem, during a keynote on the second day of Google I/O.
Karmali also shared some richer conversational features coming to Google Chat. Users will be able to subscribe to messages in a conversation, create groups, and manage memberships. These features will be helpful for ensuring targeted conversations and information reach the right agents.
“Imagine: you build a chat app that is agentic and you want to get all the information from a chat app. You can now summarize it, and this helps you take actions and so on,” Karmali said.
Other AI-powered features coming to Google Workspace include personalized smart replies, directed inbox cleanup, and fast appointment scheduling in Gmail; the ability to turn Google Slides decks into videos; and writing assistance in Google Docs that limits Gemini to specific sources designated by the user.
Google I/O overlapped with Microsoft’s Build developer conference, where the company introduced new Copilot agent features this week. Both tech giants are in the early stages of AI, and both shows focused on how each is still developing its own agent ecosystem, Gownder said.
Source:: Computer World
By Siôn Geschwindt Again, the German-Danish startup using ancient bacteria to turn CO2 into new chemicals, is building a new bioreactor plant in Texas. The facility will be located at Texas City, a major petrochemicals park located on the Gulf Coast. The industrial centre is run by Diamond Infrastructure Solutions — a joint venture between chemicals giant Dow and Macquarie Asset Management. “We’re building a global company, and that also means taking our technology into new regions,” Again’s co-founder Max Kufner told TNW. “There is a high demand in the US for our chemicals, particularly ones that can be sustainably made on-shore.”…This story continues at The Next Web
Source:: The Next Web
This week, more than 140 civil rights and consumer protection organizations signed a letter to Congress opposing legislation that would preempt state and local laws governing artificial intelligence (AI) for the next decade.
House Republicans last week added a broad 10-year ban on state and local AI regulations to the Budget Reconciliation Bill that’s currently being debated in the House. The bill would prevent state and local oversight without providing federal alternatives.
Editor’s note: On the morning of May 22, the House approved the budget bill, sending it on to the Senate.
This year alone, about two-thirds of US states have proposed or enacted more than 500 laws governing AI technology. If passed, the federal bill would stop those laws from being enforced.
The nonprofit Center for Democracy & Technology (CDT) joined the other organizations in signing the opposition letter, which warns that removing AI protections leaves Americans vulnerable to current and emerging AI risks.
Travis Hall, the CDT’s director for state engagement, answered questions posed by Computerworld to help determine the impact of the House Reconciliation Bill’s moratorium on AI regulations.
Why is regulating AI important, and what are the potential dangers it poses without oversight?
AI is a tool that can be used for significant good, but it can and already has been used for fraud and abuse, as well as in ways that can cause real harm, both intentional and unintentional — as was thoroughly discussed in the House’s own bipartisan AI Task Force Report.
These harms can range from impacting employment opportunities and workers’ rights to threatening accuracy in medical diagnoses or criminal sentencing, and many current laws have gaps and loopholes that leave AI uses in gray areas. Refusing to enact reasonable regulations places AI developers and deployers into a lawless and unaccountable zone, which will ultimately undermine the trust of the public in their continued development and use.
How do you regulate something as potentially ubiquitous as AI?
There are multiple levels at which AI can be regulated. The first is through the application of sectoral laws and regulations, providing specific rules or guidance for particular use cases such as health, education, or public sector use. Regulations in these spaces are often already well established but need to be refined to adapt to the introduction of AI.
The second is that there can be general rules regarding things like transparency and accountability, which incentivize responsible behavior across the AI chain (developers, deployers, users) and can ensure that core values like privacy and security are baked in.
Why do you think the House Republicans have proposed banning states from regulating AI for such a long period of time?
Proponents of the 10-year moratorium have argued that it would prevent a patchwork of regulations that could hinder the development of these technologies, and that Congress is the proper body to put rules in place.
But Congress thus far has refused to establish such a framework, and instead it’s proposing to prevent any protections at any level of government, completely abdicating its responsibility to address the serious harms we know AI can cause.
It is a gift to the largest technology companies at the expense of users — small or large — who increasingly rely on their services, as well as the American public who will be subject to unaccountable and inscrutable systems.
Can you describe some of the state statutes you believe are most important to safeguarding Americans from potential AI harms?
There are a range of statutes that would be overturned, including laws that govern how state and local officials themselves procure and use these technologies.
Red and blue states alike — including Arkansas, Kentucky, and Montana — have passed bills governing the public sector’s AI procurement and use. Several states, including Colorado, Illinois, and Utah, have consumer protection and civil rights laws governing AI or automated decision systems.
This bill undermines states’ ability to enforce longstanding laws that protect their residents or to clarify how they should apply to these new technologies.
Sen. Ted Cruz, R-Texas, warns that a patchwork of state AI laws causes confusion. But should a single federal rule apply equally to rural towns and tech hubs? How can we balance national standards with local needs?
The blanket preemption assumes that all of these communities are best served with no governance of AI or automated decision systems — or, more cynically, that the short-term financial interests of companies that develop and deploy AI tools should take precedence over the civil rights and economic interests of ordinary people.
While there can be a reasoned discussion about what issues need uniform rules across the country and which allow flexibility for state and local officials to set rules (an easy one would be regarding their own procurement of systems), what is being proposed is a blanket ban on state and local rules with no federal regulations in place.
Further, we have not seen, nor are we likely to see, a significant “patchwork” of protections throughout the country. The same arguments were made in the state privacy context, and yet, with one exception, states have passed identical or nearly identical laws, mostly written by industry. Preempting state laws to avoid a patchwork system that’s unlikely to ever exist is simply bad policy and will cause more needless harm to consumers.
Proponents of the state AI regulation moratorium have compared it to the Internet Tax Freedom Act — the “internet tax moratorium,” which helped the internet flourish in its early days. Why don’t you believe the same could be true for AI?
There are a couple of key differences between the Internet Tax Freedom Act and the proposed moratorium.
First, what was being developed in the 1990s was a unified, connected, global internet. Splintering the internet into silos was (and, to be frank, still is) a real danger to the fundamental feature of the platform that allowed it to thrive. The same is not true for AI systems and models, a diverse set of technologies and services that are regularly customized to respond to particular use cases and needs. Having diverse sets of regulatory responsibilities does not threaten AI the way it did the nascent internet.
Second, removal of potential taxation as a means of spurring commerce is wholly different from removing consumer protections. The former encourages participation by lowering prices, while the latter adds significant cost in the form of dealing with fraud, abuse, and real-world harm.
In short, there is a massive difference between stating that an ill-defined suite of technologies is off limits from any type of intervention at the state and local level and trying to help bolster a nascent and global platform through tax incentives.
Source:: Computer World
By Adarsh Verma Privacy is no longer just a buzzword—it’s the basis of trust online. As surveillance grows and…
The post Top VPS Hosting Benefits For Running Privacy-Focused Applications appeared first on Fossbytes.
Source:: Fossbytes
By Thomas Macaulay The rapid rise of AI agents is sparking both excitement and alarm. Their power lies in their ability to complete tasks with increasing autonomy. Many can already pursue multi-step goals, make decisions, and interact with external systems — all with minimal human input. Teams of AI agents are beginning to collaborate, each handling a specialised role. As their autonomy increases, they’re poised to reshape countless business processes. Tech giants are heralding them as the future of the web. At Microsoft’s Build conference this week, the company declared that we have entered “the era of AI agents.” OpenAI CEO Sam Altman…This story continues at The Next Web
Source:: The Next Web
By Hisan Kidwai The internet is a vast ocean of billions of websites, each with its unique perks and…
The post 10 Cool Websites To Visit In 2025 appeared first on Fossbytes.
Source:: Fossbytes
In a move that casts a shadow across Apple’s upcoming Worldwide Developers Conference, OpenAI has announced that it will purchase io, the AI startup founded by acclaimed former Apple designer Sir Jony Ive, who helped create the iMac, iPod, and iPhone.
The deal sees Ive’s hand-picked io team of talented Apple alumni merge with OpenAI. Ive himself stays outside the $6.5 billion deal. He will retain independence at his company LoveFrom but will be taking on “deep design and creative responsibilities across OpenAI and io.”
Toward the human interface for AI
The intention is to design the user interfaces for AI-enabled machines that will define the future of tech. (I hate to say “I told you so.“)
“This is an extraordinary moment,” declares the OpenAI press release announcing the deal. “Computers are now seeing, thinking and understanding. Despite this unprecedented capability, our experience remains shaped by traditional products and interfaces.”
While OpenAI doesn’t quite go so far as to say the move means AI is about to enter its iPhone moment, the company quite clearly believes this to be the case. Ive famously left Apple in 2019, working as an advisor for a while until he ceased working for the company completely, just before beginning io.
“I have a growing sense that everything I have learned over the last 30 years has led me to this moment,” said Ive.
Apple echoes are everywhere
For a veteran Apple watcher, there are a lot of echoes in this announcement, and they should not be ignored. Even the press release has an Apple-like resonance, headed up by a tasteful picture of Ive with OpenAI CEO Sam Altman.
Ive and his hand-picked team of historically important former Apple design talent, including Evans Hankey and Tang Tan, will take over design and creative at OpenAI to build AI-enabled devices people can use to make things. If that sounds familiar, think back to Apple founder Steve Jobs and his description of computers as “bicycles for the mind.” That sounds like what OpenAI now intends to make.
It isn’t just an intimation of Apple; it’s about muscling into the same innovation space.
“I hope we can bring some of the delight, wonder and creative spirit that I first felt using an Apple Computer 30 years ago,” said Sam Altman. You can watch a short video featuring Altman and Ive discussing their plans here.
A change in the balance
Of course, Apple has its own relationship with OpenAI, but the appointment of its acclaimed former designer to the company will change the balance of power — particularly as Apple itself is struggling with artificial intelligence.
To put the deal into some kind of context, analyst firm Gartner expects worldwide genAI spending to reach a total of $644 billion in 2025, an increase of 76.4% from 2024. This spend includes a huge increase in sales of AI devices, particularly servers and smartphones.
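As a quick sanity check, the 2024 base implied by those two Gartner figures can be backed out directly:

```python
spend_2025 = 644.0   # projected worldwide genAI spending in 2025, $bn (Gartner)
growth = 0.764       # the stated 76.4% increase over 2024

# Divide out the growth factor to recover the implied 2024 spend.
spend_2024 = spend_2025 / (1 + growth)
print(round(spend_2024, 1))  # → 365.1, i.e. roughly $365bn spent in 2024
```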
“By 2026, generative design AI will automate 60% of the design effort for new websites and mobile apps,” according to Gartner’s Market Databook, which anticipates that by 2026 over 100 million humans will “engage robo-colleagues (synthetic virtual colleagues) to contribute to enterprise work.”
An analyst perspective
So, what does Gartner think the deal means for OpenAI, Apple, and the future of tech?
I spoke with Chirag Dekate, Gartner VP and analyst for quantum technologies, AI infrastructures, and supercomputing. He thinks the arrangement will put OpenAI in competition with all the big hardware players in tech, and, perhaps more importantly, reflects an evolutionary step in AI, one that ends up with far more intelligent devices that feel natural to use. I reproduce his analysis below, as it’s far too wide in scope to paraphrase.
What does this deal mean for OpenAI?
Dekate: “This marks a next phase of evolution for OpenAI. Market trends as indicated by Google at their I/O event yesterday, Meta, and other innovators, are clear: Leadership in AI is not just about building powerful models anymore, it’s about shaping the entire experience around AI. Bringing Jony Ive on board to design AI-native hardware shows that like Google, Meta, and peers, OpenAI is serious about creating devices where the tech and the design work hand in hand.
“Until now, OpenAI was reliant on its peers and ecosystems in the cloud to diffuse AI into products and experiences. With this acquisition, OpenAI, rather than relying on others to bring its models to life, is stepping into the driver’s seat. OpenAI wants to craft the physical touchpoints of AI itself, devices that feel intuitive and indispensable in everyday experiences.
“This acquisition is also a strategic move. With this kind of vertical integration, OpenAI is positioning itself to go head-to-head with the likes of Google, Meta, and Tesla, not just on software, but on how we experience AI in the real world.”
How will this impact Apple and its user base?
Dekate: “This is an interesting moment for Apple. With Ive, the company’s longtime design visionary, helping build the next generation of AI devices outside of Apple, it could introduce new ways for people to interact with technology, possibly in ways that challenge Apple’s current product thinking. Today’s iPhone experience — and, more broadly speaking, the Apple experience — leaves a lot to be desired. It is expensive, clunky, and feels dated, especially around AI.
“Here the lack of AI nativity within Apple is clear and experienceable for most Apple users. Android experiences from Samsung and Google Pixel offer more AI-native infusion in a way Apple doesn’t. For Apple users, it means more options on the horizon. If OpenAI and Ive succeed, we could see the emergence of newly designed AI devices that rival Apple’s in terms of experience and aesthetics but are more innovative and ready for an AI-native era in a way Apple’s current products aren’t.
“That said, Apple isn’t standing still. They’re likely to ramp up their own AI integration, maybe even explore new device categories to stay ahead. It’s not a threat to Apple’s ecosystem yet, but it is a reminder that in an AI-native era, yesterday’s leaders may not always have an advantage if they do not have AI-native cores.”
What’s the bigger picture for the industry?
Dekate: “This collaboration is part of a broader shift: AI is moving from digital and into the physical world. We’re seeing it with Google’s robotics and XR efforts, Meta’s smart glasses, Tesla’s Optimus, and Nvidia’s AI platforms. OpenAI’s potential move into devices and physical AI is an accelerant.
“The future isn’t just smarter software; it’s intelligent devices that feel natural to use. The industry is heading toward AI-first hardware, designed from the ground up for seamless, human-like interaction. And in that world, design matters more than ever.
“As AI becomes part of how we live and work, the companies that can make that experience intuitive, elegant, and even joyful, like Ive has done in his past, will lead the way.”
Source:: Computer World
By Siôn Geschwindt Europe’s not lacking talent — it’s lacking confidence. That’s the verdict from Meta’s chief AI scientist Yann LeCun, who says an “inferiority complex” among European media and investors is holding back the continent’s tech industry. “The main reason why the European tech industry is small is a mistaken assumption of technological inferiority on the part of the European media,” wrote LeCun in an X post. “Perhaps more importantly, there was a similar inferiority complex on the part of investors, which made them less willing to take risks when the mere possibility of an American competitor would rear its head. That…This story continues at The Next Web
Source:: The Next Web
Over the next few years, agentic AI is expected to bring not only rapid technological breakthroughs, but a societal transformation, redefining how we live, work and interact with the world. And this shift is happening quickly.
“By 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously,” according to research firm Gartner.
Unlike traditional AI, which typically follows preset rules or algorithms, agentic AI adapts to new situations, learns from experiences, and operates independently to pursue goals without human intervention. In short, agentic AI empowers systems to act autonomously, making decisions and executing tasks — even communicating directly with other AI agents — with little or no human involvement.
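That perceive-decide-act cycle can be reduced to a toy loop. This is purely illustrative, not any vendor’s framework; a real agent would delegate the “decide” step to an LLM and the “act” step to external tools:

```python
def run_agent(goal: int, state: int = 0, max_steps: int = 100) -> int:
    """Toy agent: repeatedly observes state and picks its own next action."""
    for _ in range(max_steps):
        if state >= goal:                 # perceive: is the goal satisfied?
            break
        action = min(1, goal - state)     # decide: choose a step toward the goal
        state += action                   # act: apply it, then observe the new state
    return state

print(run_agent(goal=5))  # → 5, reached without any per-step human input
```

The defining trait is the loop itself: the system keeps observing and acting toward a goal on its own, rather than returning a single answer to a single prompt.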
One key driver is the growing sophistication of large language models (LLMs), which provide the “brains” for these agents. Agentic AI will enable machines to interact with the physical world with unprecedented intelligence, allowing them to perform complex tasks in dynamic environments, which could be especially useful for industries facing labor shortages or hazardous conditions.
The rise of agentic AI also brings security and ethical concerns. Ensuring these autonomous systems operate safely, transparently and responsibly will require governance frameworks and testing. Guarding against unintended consequences will also require human vigilance.
Because job displacement is a potential outcome, strategies for retraining and upskilling workers will be needed as the technology necessitates a shift in how people approach work, emphasizing collaboration between humans and intelligent machines.
To stay on top of this evolving technology, follow this page for ongoing agentic AI coverage from Computerworld and Foundry’s other publications.
Agentic AI news and insights
Putting agentic AI to work in Firebase Studio
May 21, 2025: Putting agentic AI to work in software engineering can be done in a variety of ways. Some agents work independently of the developer’s environment, essentially like a remote developer. Others work directly within a developer’s own environment. Google’s Firebase Studio is an example of the latter, drawing on Google’s Gemini LLM to help developers prototype and build applications.
Why is Microsoft offering to turn websites into AI apps with NLWeb?
May 20, 2025: NLWeb, short for Natural Language Web, is designed to help enterprises build a natural language interface for their websites, using the model of their choice and their own data to answer user queries about the contents of the website. Microsoft hopes to stake its claim on the agentic web before rivals Google and Amazon do.
Databricks to acquire open-source database startup Neon to build the next wave of AI agents
May 14, 2025: Agentic AI requires a new type of architecture because traditional workflows create gridlock, dragging down speed and performance. To get ahead in this next generation of app building, Databricks announced it will purchase Neon, an open-source serverless Postgres company.
Agentic mesh: The future of enterprise agent ecosystems
May 13, 2025: Nvidia CEO Jensen Huang predicts we’ll soon see “a couple of hundred million digital agents” inside the enterprise. Microsoft CEO Satya Nadella takes it even further: “Agents will replace all software.”
Google to unveil AI agent for developers at I/O, expand Gemini integration
May 13, 2025: Google is expected to unveil a new AI agent aimed at helping software developers manage tasks across the coding lifecycle, including task execution and documentation. The tool has reportedly been demonstrated to employees and select external developers ahead of the company’s annual I/O conference.
Nvidia, ServiceNow engineer open-source model to create AI agents
May 6, 2025: Nvidia and ServiceNow have created an AI model that can help companies create learning AI agents to automate corporate workloads. The open-source Apriel model, generally available in the second quarter on Hugging Face, will help create AI agents that can make decisions around IT, human resources and customer-service functions.
How IT leaders use agentic AI for business workflows
April 30, 2025: Jay Upchurch, CIO at SAS, backs agentic AI to enhance sales, marketing, IT, and HR motions. “Agentic AI can make sales more effective by handling lead scoring, assisting with customer segmentation, and optimizing targeted outreach,” he says.
Microsoft sees AI agents shaking up org charts, eliminating traditional functions
April 28, 2025: As companies increasingly automate work processes using agents, traditional functions such as finance, marketing, and engineering may fall away, giving rise to an ‘agent boss’ era of delegation and orchestration of myriad bots.
Cisco automates AI-driven security across enterprise networks
April 28, 2025: Cisco announced a range of AI-driven security enhancements, including improved threat detection and response capabilities in Cisco XDR and Splunk Security, new AI agents, and integration between Cisco’s AI Defense platform and ServiceNow SecOps.
Hype versus execution in agentic AI
April 25, 2025: Agentic AI promises autonomous systems capable of reasoning, making decisions, and dynamically adapting to changing conditions. The allure lies in machines operating independently, free of human intervention, streamlining processes and enhancing efficiency at unprecedented scales. But David Linthicum writes, don’t be swept up by ambitious promises.
Agents are here — but can you see what they’re doing?
April 23, 2025: As the agentic AI models powering individual agents get smarter, the use cases for agentic AI systems get more ambitious — and the risks posed by these systems increase exponentially.
Agentic AI might soon get into cryptocurrency trading — what could possibly go wrong?
April 15, 2025: Agentic AI promises to simplify complex tasks such as crypto trading or managing digital assets by automating decisions, enhancing accessibility, and masking technical complexity.
Agentic AI is both boon and bane for security pros
April 15, 2025: Cybersecurity is at a crossroads with agentic AI. It's a powerful tool that can create reams of code in the blink of an eye, find and defuse threats, and be used both offensively and defensively. This has proved to be a huge force multiplier and productivity boon. But while powerful, agentic AI isn't dependable, and that is the conundrum.
AI agents vs. agentic AI: What do enterprises want?
April 15, 2025: Now that this AI agent story has morphed into "agentic AI," it seems to have taken on the same big-cloud-AI flavor that enterprises already rejected. What do they want from AI agents, why is "agentic" thinking wrong, and where is this all headed?
A multicloud experiment in agentic AI: Lessons learned
April 11, 2025: Turns out you really can build a decentralized AI system that operates successfully across multiple public cloud providers. It’s both challenging and costly.
Google adds open source framework for building agents to Vertex AI
April 9, 2025: Google is adding a new open source framework for building agents to its AI and machine learning platform Vertex AI, along with other updates to help deploy and maintain these agents. The open source Agent Development Kit (ADK) will make it possible to build an AI agent in under 100 lines of Python code. It expects to add support for more languages later this year.
Google’s Agent2Agent open protocol aims to connect disparate agents
April 9, 2025: Google has taken the covers off a new open protocol — Agent2Agent (A2A) — that aims to connect agents across disparate ecosystems. At its annual Cloud Next conference, Google said that the A2A protocol will enable enterprises to adopt agents more readily as it bypasses the challenge of agents that are built on different vendor ecosystems not being able to communicate with each other.
Riverbed bolsters AIOps platform with predictive and agentic AI
April 8, 2025: Riverbed unveiled updates to its AIOps and observability platform that the company says will transform how IT organizations manage complex distributed infrastructure and data more efficiently. Expanded AI capabilities are aimed at making it easier to manage AIOps and enabling IT organizations to transition from reactive to predictive IT operations.
Microsoft’s newest AI agents can detail how they reason
March 26, 2025: If you’re wondering how AI agents work, Microsoft’s new Copilot AI agents provide real-time answers on how data is being analyzed and sourced to reach results. The Researcher and Analyst agents take a deeper look at data sources such as email, chat or databases within an organization to produce research reports, analyze strategies, or convert raw information into meaningful data.
Microsoft launches AI agents to automate cybersecurity amid rising threats
March 26, 2025: Microsoft has introduced a new set of AI agents for its Security Copilot platform, designed to automate key cybersecurity functions as organizations face increasingly complex and fast-moving digital threats. The new tools focus on tasks such as phishing detection, data protection, and identity management.
How AI agents work
March 24, 2025: By leveraging technologies such as machine learning, natural language processing (NLP), and contextual understanding, AI agents can operate independently, even partnering with other agents to perform complex tasks.
Nvidia launches AgentIQ toolkit to connect disparate AI agents
March 21, 2025: As enterprises look to adopt agents and agentic AI to boost the efficiency of their applications, Nvidia this week introduced a new open-source software library — the AgentIQ toolkit — to help developers connect disparate agents and agent frameworks.
5 top business use cases for AI agents
March 19, 2025: AI agents are poised to transform the enterprise, from automating mundane tasks to driving customer service and innovation. But having strong guardrails in place will be key to success.
Deloitte unveils agentic AI platform
March 18, 2025: At Nvidia GTC 2025 in San Jose, Deloitte announced Zora AI, a new agentic AI platform that offers a portfolio of AI agents for finance, human capital, supply chain, procurement, sales and marketing, and customer service. The platform draws on Deloitte's experience from its technology, risk, tax, and audit businesses, and is integrated with all major enterprise software platforms.
The dawn of agentic AI: Are we ready for autonomous technology?
March 15, 2025: Much of the AI work prior has focused on large language models (LLMs) with a goal to give prompts to get knowledge out of the unstructured data. So it’s a question-and-answer process. Agentic AI goes beyond that. You can give it a task that might involve a complex set of steps that can change each time.
How to know a business process is ripe for agentic AI
March 11, 2025: Deloitte predicts that in 2025, 25% of companies that use generative AI will launch agentic AI pilots or proofs of concept, growing to 50% in 2027. The firm says some agentic AI applications, in some industries and for some use cases, could see actual adoption into existing workflows this year.
With new division, AWS bets big on agentic AI automation
March 6, 2025: Amazon Web Services customers can expect to hear a lot more about agentic AI from AWS in future with the news that the company is setting up a dedicated unit to promote the technology on its platform.
How agentic AI makes decisions and solves problems
March 6, 2025: GenAI’s latest big step forward has been the arrival of autonomous AI agents. Agentic AI is based on AI-enabled applications capable of perceiving their environment, making decisions, and taking actions to achieve specific goals.
CIOs are bullish on AI agents. IT employees? Not so much
Feb. 4, 2025: Most CIOs and CTOs are bullish on agentic AI, believing the emerging technology will soon become essential to their enterprises, but lower-level IT pros who will be tasked with implementing agents have serious doubts.
The next AI wave — agents — should come with warning labels. Is now the right time to invest in them?
Jan. 13, 2025: The next wave of artificial intelligence (AI) adoption is already under way, as AI agents — AI applications that can function independently and execute complex workflows with minimal or limited direct human oversight — are being rolled out across the tech industry.
AI agents are unlike any technology ever
Dec. 1, 2024: The agents are coming, and they represent a fundamental shift in the role artificial intelligence plays in businesses, governments, and our lives.
AI agents are coming to work — here’s what businesses need to know
Nov. 21, 2024: AI agents will soon be everywhere, automating complex business processes and taking care of mundane tasks for workers — at least that’s the claim of various software vendors that are quickly adding intelligent bots to a wide range of work apps.
Agentic AI swarms are headed your way
Nov. 1, 2024: OpenAI launched an experimental framework called Swarm. It's a "lightweight" system for the development of agentic AI swarms, which are networks of autonomous AI agents able to work together to handle complex tasks without human intervention, according to OpenAI.
Is now the right time to invest in implementing agentic AI?
Oct. 31, 2024: While software vendors say their current agentic AI-based offerings are easy to implement, analysts say that's far from the truth.
Source:: Computer World
By Deepti Pathak Want to listen to music with friends in real time, no matter where they are? Spotify…
The post How To Start a Jam on Spotify? appeared first on Fossbytes.
Source:: Fossbytes
By Siôn Geschwindt Paris-based AI startup Veesion has secured €38mn to fuel expansion to the US — where it looks to help cure the country’s shoplifting “epidemic.” Veesion’s AI-based computer vision software is trained to spot gestures in security camera feeds, such as a shopper putting an item in their pocket. If it sees something suspicious, the AI pings the store owner or security guard via an app, where it displays a recording of the activity. The user then makes the final judgment on whether the situation qualifies as theft. The software comes in a small box that plugs into a shop’s existing…This story continues at The Next Web
Source:: The Next Web
Do you want to make a podcast from notes you record on your iPhone? You can, as Google has introduced an iOS version (and an Android version) of its popular NotebookLM tool, which can do this, among other things.
The news follows hot on the heels of speculation that Apple may try to overcome shortcomings in its own AI development by opening up its platform to third-party AI services in addition to ChatGPT and Apple Intelligence. It may be relevant to point out that Apple this week made it possible to use Google Translate instead of Apple’s own Translate app on iPhones.
What is NotebookLM?
NotebookLM has won a ton of praise since it appeared. It is a really useful document summarization system, very handy for researchers — and it can even turn topics you write about into engaging, thought-provoking podcasts. The service achieves this through Google's Gemini genAI system, which seems to be improving rapidly at focused tasks.
“We’ve received a lot of great feedback from the millions of people using NotebookLM, our tool for understanding and engaging with complex information. One of the most frequent requests has been for a mobile app — for listening to Audio Overviews on the go, asking questions about sources in the moment, and sharing content directly to NotebookLM while browsing,” said Google when it announced the new apps.
Making a podcast on your iPhone
NotebookLM has been available as a web app, and now also as an app for iOS and Android. Once installed, you can use the mobile app to create new notes and access those you may already have created via your Google account. You can also add new sources to notes and create podcasts of those notes. But one of the best new features is the ability to get involved in the podcast/conversation.
Tap the Join button and you can interact with the AI-generated hosts, asking them questions or steering the conversation. It’s remarkable, particularly if you are still trying to explain which song you want Siri to play in Apple Music.
It shows the extent to which Apple’s AI services are playing catch-up and may also be why Apple’s management is thinking about opening up the company’s platform to third-party AI services.
Is this how Apple will make AI services optional?
The move to make Google Translate an option for users shows how that may be done. Just as Apple is being forced to let users choose between browsers in Europe, a new options setting for translation lets you select which service to use.
Finding a way to offer these choices while preserving platform integrity is easier said than done. Apple has admitted to having thousands of engineers tasked with figuring out how to make that possible.
But as the company moves forward with developing solutions that deliver such choice, it is also creating the template we will probably see it follow as it moves to offer up support for different forms of AI services on its devices.
With that in mind, it is likely that, as AI services introduce apps for Apple’s systems, the company will introduce a new setting in which users will be able to choose what service to use. It is possible that Apple will need to keep Apple Intelligence as the first point of contact, acting as a kind of concierge for queries, which it then directs to an appropriate AI. Users will then select which service will offer the default AI.
What about people who don’t want to use these services?
For enterprise users, this poses additional challenges. To avoid data leaks, not every business will be prepared to authorize employees to use every available genAI model. That challenge implies that Apple will also need to build APIs for Mobile Device Management to enable IT to switch off access to these third-party genAI models for managed devices.
Access control must therefore logically extend to the apps themselves: IT will want to be able to prevent people from using apps such as NotebookLM on managed devices, presumably by setting restrictions on the use of certain apps.
It also seems viable to expect Apple to offer up an additional choice — one in which users are given the opportunity to select to stay with a purely Apple experience. After all, that should also be an option for those who like it, right?
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
Source:: Computer World
China has launched 12 satellites, which experts describe as the world’s first operational space-based computing network, applying edge computing principles to orbital operations in a development that could reshape how enterprises manage global data.
The satellites, launched by Guoxing Aerospace of China, are equipped with AI systems, advanced inter-satellite communication capabilities, and onboard computing power.
The network, formally named the “Three-Body Computing Constellation” but also referred to as the “Star Computing Constellation 021” mission, represents China’s push to create computing infrastructure beyond Earth that could transform data processing while potentially reducing environmental impacts, Guoxing Aerospace said in a statement.
The constellation is part of a Chinese project to build a network of 2,800 satellites enabling real-time in-orbit computing and data processing.
“China’s orbital AI constellation is more than a technological feat—it’s a proof of concept for distributed processing, autonomy at the edge, and context-driven compute as core tenets of modern architecture,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “What makes this constellation distinctive is not just its scale, but its shift in control logic: inference and orchestration happen in orbit, across a high-speed inter-satellite mesh, without needing constant cloud fallback.”
Edge computing in space
For enterprise decision-makers, the constellation represents perhaps the most ambitious application yet of edge computing principles — processing data directly where it’s generated rather than sending everything to centralized facilities. This orbital implementation showcases how these principles can address even the most extreme bandwidth constraints.
“China’s Three-Body Computing Constellation marks a radical evolution in edge computing — demonstrating a ‘hyper-edge’ model: autonomous, localized processing under extreme latency and bandwidth constraints,” explained Deepti Sekhri, practice director at Everest Group. “This leap forces enterprise edge strategies to move beyond basic edge nodes toward resilient, AI-infused micro-infrastructures. We can expect the biggest impact in industries like manufacturing, defense, and logistics, where decisions must happen instantly and locally.”
Traditional satellites face a severe data transmission bottleneck: bandwidth constraints mean much of the data they collect is lost before it ever reaches Earth. This parallels challenges many enterprises encounter in remote operations with bandwidth-intensive applications.
“We are now entering a post-centralisation era — where compute is pulled toward the edge not by ideology, but by necessity,” noted Gogia. “Whether it’s orbital satellites, smart grids, or in-field robotics, AI workloads are becoming heavier, more inference-driven, and intolerant to network-induced drag. Centralised clouds won’t disappear — but for many classes of applications, they will no longer be the first stop.”
“As we enter an AI-native era shaped by data-heavy, latency-sensitive, and bandwidth-limited environments, distributed architectures are increasingly becoming relevant,” added Sekhri. “The appeal of processing data closer to where it’s generated is gaining ground fast, and space-based compute reinforces this directional shift.”
Technical specifications showcase distributed potential
As per Guoxing Aerospace, each of the 12 satellites contains specialized computing hardware capable of handling up to 744 trillion operations per second. With 12 satellites working together, the array delivers a combined computing power of 5 peta operations per second (POPS) — equivalent to 5 quadrillion calculations per second.
To put this in perspective, when fully implemented, the constellation could reach 1,000 POPS of processing capacity—potentially surpassing Earth's most powerful systems, according to the Chinese government. The El Capitan supercomputer at Lawrence Livermore National Laboratory in California, ranked as the world's most powerful last year, achieves approximately 1.72 POPS.
To function as a unified processing network, the constellation employs laser communication technology, achieving data transfer speeds up to 100 gigabits per second, comparable to some of the fastest terrestrial fiber optic networks, the statement added.
The constellation also incorporates 30 terabytes of on-board storage and runs an AI model with 8 billion parameters, demonstrating how sophisticated artificial intelligence applications can be deployed at the network edge. For context, this means the system can run relatively sophisticated artificial intelligence applications similar to some large language models, though smaller than the most advanced AI systems used on Earth today.
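The figures quoted above invite some simple sanity-check arithmetic. The sketch below, a rough back-of-envelope calculation using only the numbers reported in this article (744 TOPS per satellite, 5 POPS combined, 1.72 POPS for El Capitan, 30 TB of storage, 100 Gbps laser links), illustrates why "up to 744 TOPS" per node does not simply sum to the quoted total, and how long a full downlink of the on-board storage would take:

```python
# Back-of-envelope arithmetic for the constellation figures quoted above.
# All input values come from the article; the calculations are illustrative only.

TOPS_PER_SATELLITE = 744     # peak per-satellite capacity, trillions of ops/sec
SATELLITES = 12
CONSTELLATION_POPS = 5       # quoted combined capacity, peta-ops/sec
EL_CAPITAN_POPS = 1.72       # quoted figure for El Capitan

# A naive sum of per-satellite peaks overshoots the quoted 5 POPS,
# consistent with "up to 744" being a peak rather than sustained figure.
peak_sum_pops = TOPS_PER_SATELLITE * SATELLITES / 1000  # TOPS -> POPS
print(f"Naive peak sum: {peak_sum_pops:.2f} POPS vs quoted {CONSTELLATION_POPS} POPS")

# Ratio of the constellation's quoted capacity to El Capitan's.
print(f"Constellation / El Capitan: {CONSTELLATION_POPS / EL_CAPITAN_POPS:.1f}x")

# Time to move the full 30 TB of on-board storage over one 100 Gbit/s laser link.
storage_bits = 30e12 * 8     # 30 terabytes expressed in bits
link_bps = 100e9             # 100 gigabits per second
print(f"Full 30 TB transfer at 100 Gbps: {storage_bits / link_bps / 60:.0f} minutes")
```

Even at terrestrial-fiber speeds, shipping the full on-board dataset to the ground takes tens of minutes per link, which is the bandwidth pressure that motivates processing data in orbit in the first place.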
Environmental and economic advantages
The space-based distributed approach also addresses growing environmental concerns about data centers — a priority for enterprises with sustainability targets. The International Energy Agency projects that global data centers could consume more than 945 terawatt hours of electricity annually by 2030—roughly equivalent to Japan’s entire electricity usage.
“The economics of enterprise compute are undergoing a structural inversion — and China’s Three-Body Computing Constellation makes that undeniable,” observed Gogia. “As AI workloads are increasingly executed at the point of collection — in orbit or on Earth — the cost of centralisation becomes a liability.”
By processing data closer to collection points, organizations can potentially reduce the energy footprint associated with moving massive datasets across global networks. Water consumption presents another environmental challenge for traditional computing facilities that distributed approaches can help address, as major technology companies use billions of liters of water annually to cool their data centers.
Implications for global operations
For multinational enterprises, the constellation demonstrates how distributed processing systems might eventually support truly global operations. Chinese officials have positioned this initiative as establishing globally accessible, mobile, and low-carbon space-based infrastructure.
“This development presents not just an engineering milestone, but a geopolitical one — casting data governance into uncharted territory,” warned Gogia. “Orbital compute is redefining the boundaries of data sovereignty. The Three-Body system decentralises not only AI processing but also geopolitical accountability — placing inference and decision-making infrastructure into orbits that may not fall cleanly under existing legal regimes.”
“However, as compute infrastructure stretches beyond sovereign borders, enterprises face a growing ambiguity around governance and jurisdiction,” cautioned Sekhri. “Enterprises will need to evolve their risk frameworks and operational policies to address a new class of sovereignty and control challenges.”
It is expected that similar networks will likely be deployed by multiple nations in the coming years, potentially creating new infrastructure that enterprises could leverage for global operations, particularly in remote areas where traditional connectivity remains challenging. “Perhaps the more pressing question isn’t how far edge computing can go — but who ultimately orchestrates it,” Sekhri pointed out. “As the compute fabric fragments across clouds, geographies, and orbits, control may shift from centralized platforms to a more federated, contested, and geopolitically charged landscape. That’s a future worth preparing for.”
Source:: Computer World
By Deepti Pathak Horizon Aircraft has just made aviation history. The Canadian company completed a stable wing-borne flight using…
The post Horizon’s Cavorite X7 Becomes First eVTOL Aircraft To Complete Transition Flight appeared first on Fossbytes.
Source:: Fossbytes