AI shopping assistants, rather than elves, may be the ones bustling behind the scenes this holiday season.
At least, Google seems to be pushing in that direction: The tech giant has released a “major AI shopping update” in Gemini that can trigger AI agents to call stores, actively track pricing, and even purchase items on their own.
This signals a new shopping paradigm, but it may also tax enterprise systems and practices.
“Google’s update moves retail closer to intent-based shopping, where the experience feels less like hunting and more like being guided to the right answer,” noted Julie Geller, a principal research director at Info-Tech Research Group.
Shoppers can even have AI agents call stores
As a natural extension of its AI-powered capabilities, Google’s AI Mode can now process shopping questions in natural language. That is, shoppers can describe what they’re looking for and receive an “intelligently organized response,” with images, pricing, reviews, and inventory info.
Responses are tailored and formatted to match each user’s questions and needs, Google explains. For instance, a shopper looking for “cozy sweaters for happy hour in warm autumn colors” will receive a list of shoppable images; another on the fence about moisturizers, meanwhile, may get a table with side-by-side comparisons based on product reviews.
“Buyers will be able to get very personal recommendations, and aggregate vendors much like they do with Google already,” noted Jason Andersen, VP and principal analyst at Moor Insights & Strategy.
Going a step further, users can now shop right inside Gemini and, when searching for products “near me” in AI mode, can access a “let Google call” button. As they browse, Gemini will prompt them for more specifics, and on the backend, call nearby stores to determine availability, price, and information on any special promos. The shopper will then receive an email or text with inventory information on Google’s aggregate Shopping Graph. This features 50 billion product listings, two billion of which are updated every hour, according to Google.
These capabilities are currently only available to US-based users. Google’s Duplex technology underpins these new features, along with a “big Gemini model upgrade” to help the AI identify the best stores to call, suggest follow-up questions, and summarize key conversation takeaways. “Let Google call” rolled out in search this week in the US, in categories including toys, health and beauty, and electronics.
Rounding out the shopping experience, Google is now supporting full-on agentic checkouts. Shoppers can keep tabs on specific items via a price-tracking feature, setting the size, color, and amount they want to spend, and will receive a notification when the product falls into their price range.
Then, at least with some eligible merchants, shoppers can opt to have Google purchase the item via Google Pay. Google is rolling out the capability initially with a number of US merchants, including Wayfair, Chewy, Quince, and some Shopify retailers.
Google emphasizes that AI will always ask for permission before buying anything, and will only pay after a human approves the price and shipping details.
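The price-watch-plus-approval flow described above is simple to sketch. The snippet below is purely illustrative (the `PriceWatch` class and its fields are my own assumptions for demonstration, not Google's implementation): a notification fires only when the price falls into range, and no purchase happens without explicit human approval.

```python
from dataclasses import dataclass

@dataclass
class PriceWatch:
    """A shopper's tracked item: attributes plus a target price."""
    product: str
    size: str
    color: str
    max_price: float  # the most the shopper wants to spend

def should_notify(watch: PriceWatch, current_price: float) -> bool:
    """Notify once the product falls into the shopper's price range."""
    return current_price <= watch.max_price

def agent_checkout(watch: PriceWatch, current_price: float, human_approved: bool) -> str:
    """The agent may only buy after a human approves price and shipping."""
    if not should_notify(watch, current_price):
        return "keep watching"
    if not human_approved:
        return "awaiting approval"
    return "purchase via Google Pay"

watch = PriceWatch("wool sweater", "M", "rust", max_price=60.00)
print(should_notify(watch, 74.99))                    # False: still above the threshold
print(agent_checkout(watch, 54.99, human_approved=False))  # awaiting approval
print(agent_checkout(watch, 54.99, human_approved=True))   # purchase via Google Pay
```

The key design point mirrors Google's stated policy: the approval flag gates the purchase step, so the agent can watch and notify autonomously but never spend on its own.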
It says these new features are “giving merchants a new way to drive foot traffic,” while also freeing up shoppers’ time.
Enterprises should rethink infrastructure
Of course, this isn’t the first time we’ve seen agents integrated into the shopping experience; Walmart, Saks Fifth Avenue, Amazon, and others have been experimenting with AI-powered shopping capabilities.
However AI agents manifest, experts urge enterprises to rethink their infrastructure.
Google’s new agentic shopping features can strain enterprise e-commerce systems by “collapsing the discovery and checkout journey into a rapid chain of machine actions that all hit at once,” noted Info-Tech’s Geller.
What used to unfold step by step now fires almost simultaneously. When an agent checks pricing, inventory, reviews, and delivery options in a few seconds, any messy data or slow decision point shows up immediately, she pointed out.
“Most enterprise systems were built around human browsing patterns, so this creates pressure on the parts of the stack that aren’t clean or are loosely connected,” said Geller.
The real work for enterprises is making sure the core pieces “don’t trip over one another,” she said. This requires consistent product data, category structures that make sense, and decision systems that can operate “without pulling everything else down with them.”
“Guardrails around how quickly an agent can hit different endpoints matter too, because the traffic no longer looks anything like traditional browsing,” said Geller.
Operators should keep an eye out for unusual patterns and step in early. A single session triggering a sudden cluster of requests, or disagreement between the availability and delivery systems, is a sign that the system is “being pushed in ways it wasn’t designed for,” said Geller.
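One common form the guardrails Geller describes can take is per-client rate limiting, for example a token bucket that caps how quickly any one agent session can hit endpoints. The sketch below is a generic illustration of the technique; the class, rates, and limits are my assumptions, not anything Geller or Google prescribes.

```python
import time

class TokenBucket:
    """Caps request rate per client: `rate` tokens/second, bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, never exceeding capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # agent is hitting endpoints faster than allowed

# An agent firing 40 requests in a tight burst: only roughly the burst
# capacity gets through; the rest are throttled for a human-like pace.
bucket = TokenBucket(rate=5, capacity=10)
allowed = sum(bucket.allow() for _ in range(40))
print(allowed)
```

A burst far larger than the bucket's capacity is exactly the "sudden cluster of requests" pattern Geller flags; the limiter turns it into a signal operators can act on rather than a load spike.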
However, there is a positive side, she noted: Pressure from AI agents forces companies to clean up the fundamentals, and shoppers will “feel that right away.”
“Information is clearer, options feel more aligned, and the small contradictions that usually frustrate people start to fall away,” said Geller.
There has been some “nice uptake” of these types of agentic features for standalone e-commerce, such as Amazon’s Rufus, noted Moor’s Andersen. “But Google takes it across many sites,” he said.
Google is abstracting the agent from the e-commerce site into a graph, which shouldn’t (at least in theory) impact site performance or scale. But Andersen questioned how often the graph will update and whether it could potentially create new or different pricing incentives.
For example, will Google share with sellers (or their competition) that a certain number of customers have asked to be flagged when their item drops from $120 to $99 MSRP? “That would be incredibly valuable information,” said Andersen.
Further, seller behavior could change based on Google’s graph updates, resulting in more or fewer flash sales. It also creates challenges for distribution models.
“If I have several certified sellers, will there be a race to the bottom on my product, where an agent can pit different routes to market against each other,” Andersen questioned, “and how will Google prioritize the sellers?”
At this early stage, it’s difficult to know whether vendors will have the ability to opt out of the shopping graph, or if adoption will be slow enough so they can adapt as this new buying paradigm develops, Andersen noted.
Overall, he said, “this looks great for buyers, but for sellers, it could potentially be very disruptive.”
Source:: Computer World
The laptop you use tomorrow will be like a Mac: beautifully designed, easy to use, highly secure, and packed with enough power to run artificial intelligence (AI) on device.
It will possess advanced memory handling to optimize the use of that precious component, rather than squandering cash, heat sinks, and internal real estate on RAM that doesn’t usually get used. Designed to optimize the OS it runs, it will integrate with your mobile devices, have a built-in tracker in case of loss, and will retain the best possible value on second-user markets.
Professionals will use these machines to replace desktops in countless scenarios, boosted by on-device AI capabilities — or at least, highly private and sovereign cloud-based AI services. In a world of energy scarcity, the power-efficient chip in the device in your hands will be worth three in the cloud, while computing models will evolve to be small enough to work on devices at the edge most of the time.
What you’ve got
You don’t need to wait for this future; with Apple’s all-new 14-in. MacBook Pro with an M5 processor, tomorrow’s already here.
It’s a computer that ticks each one of those boxes, with the kind of industry-leading power and performance required to make it a capable productivity partner for years to come. In that sense, it’s much like the M5 iPad Pro I recently reviewed.
As for the specifications, I’ve been using a stylish, Apple-provided Space Black 14-in. MacBook Pro. The mid-range model ships with 1TB SSD storage, costs $1,799 and carries the powerful 10-core CPU/10-core GPU M5 processor, equipped with 16GB Unified Memory. Apple got it back, but I still have my memories.
Which raises the question: how powerful is this chip? Stopping briefly from making pictures of family members riding unicorns in DrawThings AI, I reached for my Apple ID and installed a handful of the usual tests.
Making sure to check the OS was up to date on my test model and sadly switching off the rather thrilling game I’d managed to disappear into, I set them running (one at a time). These are the results I gained on the test machine. (They seem to be in the same range as data found online.)
Geekbench 6
Single-core: 4,250
Multi-core: 17,819
OpenCL: 48,470
Cinebench
CPU single-core: 2,464
CPU multi-core: 15,745
MP ratio: 6.39
Blackmagic Disk Speed Test
Write: 6,500.4 MB/s
Read: 6,774.3 MB/s
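As a quick sanity check of my own (not part of the benchmark suite), Cinebench's MP ratio is simply the multi-core score divided by the single-core score, and the reported figures hold together:

```python
# Cinebench MP ratio = multi-core score / single-core score.
single_core = 2464
multi_core = 15745

mp_ratio = multi_core / single_core
print(round(mp_ratio, 2))  # 6.39 — matches the reported ratio
```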
What do the numbers mean?
Given that test results really only mean something to a relatively small number of folks, let me explain what these data points signify.
The Geekbench score means these Macs are among the top three computers in the world when it comes to processor-intensive operations (the others are energy-devouring desktops you can’t pop under your arm). The score also means the M5 MacBook Pros should turn out to be the fastest Macs you’ve ever browsed the web with, as apps such as browsers, email, and all the other things you use each day tend to rely on single-core processes — and the MacBook Pro is the fastest single-core notebook you’ll find on Geekbench right now.
That means you’ll experience a noticeable difference doing the things you do every day, while also having the power to handle complex operations you might need to tackle less often.
The Cinebench data is also good news. It means that if your business involves rendering images, applying complex 3D transitions, or even data modeling, these machines will crunch right through those tasks. Finally, the Blackmagic test reassures us that even when handling really large chunks of data, such as video or RAW images, you’ll have little lag while those huge files are opened and worked with.
Everything you do, from games to 3D modeling to messing around with Genmoji will look remarkable on the now customary Liquid Retina XDR display — a display that also happens to be color-accurate enough for film and television color grading.
These will do the business
As you might expect from a professional Mac, these machines will take anything you throw at them and come back for more. Serious pro users will take heart in this, as it bodes extraordinarily well for the more advanced (M5 Pro, M5 Ultra) processors we expect to appear in spring.
That Macs running those chips are likely to appear means the company now has a Mac to scale across the widest possible usage scenarios, with the introduction of a low-cost MacBook expected to take an even bigger chunk out of the low- and mid-range PC market.
When it comes to price, the fact you can pick up the same machine I tested for $1,799 makes these Macs an absolute steal. Yes, I know that amount isn’t peanuts, it’s a lot of money – more than I can afford on the ever-shrinking pittance I make in journalism. But if you’re a professional user doing professional tasks that require this much horsepower, the price seems plenty attractive.
Given that the Mac consistently delivers significantly higher Geekbench test results than its PC brethren, Windows laptops with Snapdragon X Elite and Intel Core Ultra 7 200 processors can’t match these devices. Indeed, by the time PCs carrying the next-gen Snapdragon/Intel chips appear next year, Apple will answer back with an even more performant M5 Pro. Apple Silicon really is winning the processor wars, and even high-end gamers will see the benefit of the performance, power, and flexibility of these machines, with much better battery life.
And a better OS
Skip the eye candy and think about it. Apple’s OSes consistently generate the highest user satisfaction scores in the business. More secure, built with privacy inside, easy to use and much-loved by consumers (your employees), Macs are cheaper to run over time, cost less in tech support, and you can swap them for real hard currency once they reach EOL in your company.
Not only this, but you get regular software updates, annual operating system updates (free), and a thriving ecosystem to support device management, security, identity and beyond. Better yet, free training is available at most Apple retail stores.
While I agree that defining what makes an OS “better” is necessarily about individual choice, it’s hard not to see how these Macs tick the right boxes. That they will also run Windows really well in emulation mode thanks to Parallels (or various flavors of Linux) means you can legitimately go Mac while maintaining legacy integration.
Innovation inside
If I had a cent for every time some “influencer” moaned about Apple’s lack of innovation, I’d probably be only mildly better off, but it remains as untrue as it’s ever been. Guy Debord puts it this way, saying, “The fetishism of the commodity … attains its ultimate fulfilment in the spectacle.”
When it comes to Apple, it means many continue to seek innovation in relatively shallow things like shape and form, while ignoring the value of the heaps of innovation packed inside the company’s products. Think about the Macs you own now and what they can do in contrast to the iMac you perhaps once had in your home in the late 90s.
Sure, the devices aren’t in see-through, multi-colored plastic anymore. But just look at the rich set of features inside: the processor, the operating system, the components, the graphics support, the display innovation, and more. Stop, think beyond the spectacle, and you will surely recognize that packed inside each Mac are literally hundreds and hundreds of years of human ingenuity, going all the way back to at least 1843 and the genius of Ada Lovelace — and probably back to alchemy itself.
Alchemy? What else do you call the weird magic in materials science inside every MacBook Pro? That alchemy is evident in the fact that the Mac is made from 45% recycled material, including a 100% recycled aluminum enclosure and 100% recycled rare earths in the magnets, even down to 100% recycled cobalt in the battery.
While some of these materials owe debts to early chemistry, Apple’s deep investments in new manufacturing processes should be seen as just as innovative as the touch UI on the first iPhone. Even Alexander Graham Bell would be impressed making his first FaceTime call using that 12-megapixel Center Stage Camera, the built-in studio microphones, and superb six-speaker surround sound system. Put some music on and disappear into a beautifully productive audio bubble from this machine.
Buying advice
Not everyone needs one, but many people will want one anyway. Apple’s MacBook Air remains the go-to Mac for most of us, but if your work involves anything at all processor-intensive, then you’ll want to go Pro.
If that’s you and you happen to be using an M1 or (arguably) M2 Mac, or earlier, then this is the right upgrade for you. If you are already using an M4 you can probably wait another year before you upgrade. If you’re on a Windows PC, it’s likely that after a little culture shock (mostly around the Ctrl button) you’ll be eminently satisfied with a Mac that runs Windows better than most PCs.
Would I get one? Of course I would, I’m the Appleholic. It should be clear that tomorrow’s laptops will deliver as much as this Mac — but by then, Apple will be offering something even better. Right now, I don’t think there is a laptop that’s any better than this that isn’t also made by Apple.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
Source:: Computer World
The gist:
Salesforce will add Doti’s technology to Slack, where it can answer workers’ questions using enterprise data.
The deal will help Salesforce catch up with rivals already developing their own AI-based enterprise search tools.
Salesforce has entered into a definitive agreement to acquire Israeli startup Doti, aiming to enhance the AI-based enterprise search capabilities offered via Slack.
The demand for efficient data retrieval and interpretation has been growing within enterprises, driven by the need to streamline workflows and increase productivity, thereby accelerating decision-making.
The global enterprise search market is projected to reach $12.2 billion by 2032 from just $5 billion in 2022, growing at a CAGR of 9.6%, a report from Allied Market Research showed.
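As a back-of-the-envelope check (mine, not the report's), the growth rate implied by those two figures follows the standard formula CAGR = (end/start)^(1/years) − 1:

```python
# Implied compound annual growth rate from $5B (2022) to $12.2B (2032).
start, end, years = 5.0, 12.2, 10

cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # about 9.3%, close to the report's stated 9.6%
```

The small gap between the implied 9.3% and the reported 9.6% likely reflects rounding or slightly different base years in the report itself.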
Slack itself has been offering an enterprise AI search capability since March to help enterprises discover information from applications and services, building on its previously released ability to search information within its platform.
Given the market opportunity, the acquisition is a no-brainer, as Salesforce will want to advance its capabilities in the space, analysts pointed out. Doti’s current product is an AI-based enterprise search bot that can be used from Slack to surface insights from across applications and services such as Datadog, GitLab, Jira, Confluence, Notion, Slack, Salesforce, Monday, and Zendesk, among others.
“Doti supports both humans and AI agents in real-time by not only retrieving but also interpreting information. That should be the real reason why Salesforce wants it. The company has been pushing towards a model where Slack becomes the primary workspace and Agentforce powers automated workflows inside it,” said Ashwin Venkatesan, executive research leader at HFS Research.
“In that vision, Doti provides Salesforce with the missing intelligence layer, transforming conversations into accurate answers and actions. In simple terms, it strengthens the bridge between chat, context, and execution, which is central to Salesforce’s agent-driven roadmap,” Venkatesan added.
Explaining further Salesforce’s rationale to acquire Doti, Venkatesan pointed out that Doti has already achieved a few complex parts needed to execute Salesforce’s vision of combining Slack and Agentforce, including building a knowledge-graph backbone, an auto-answering layer that behaves more like an assistant than a search bar, flexible deployment options, and deep, native integration with Slack.
“…getting to Doti’s level of maturity would have taken years, and proper execution would have been a key challenge. That’s why an in-house build wasn’t the practical method,” Venkatesan said.
However, analysts pointed out that Slack, with or without Doti’s expertise, faces pressure from other vendors, including Microsoft, Google, and AWS, in the AI-based enterprise search space.
The current competitive landscape comprises three broad groups: large platform providers such as Microsoft, Google, and AWS; specialist search engines such as Coveo, Sinequa, Lucidworks, and Elastic; and assistant-layer players like Glean, which sit directly in the workflow, Venkatesan said.
Doti’s team will join Salesforce’s AI R&D hub in Israel after the acquisition, which is expected to close by January.
Source:: Computer World
By Deepti Pathak If your right AirPod is not charging or refuses to play any sound, you’re not alone….
The post How To Fix Your Right AirPod Not Charging? appeared first on Fossbytes.
Source:: Fossbytes
By Deepti Pathak Choosing the right laptop for work is not that easy. You need something fast, something reliable,…
The post The 5 Best Business Laptops Built for Work In 2025 appeared first on Fossbytes.
Source:: Fossbytes
In a reality attack destined no doubt to be completely ignored by ideologically deluded regulators and cash-hungry competitors, Apple has published an extensive report that proves the anticipated benefits of lower App Store commissions are not reaching European consumers at all.
Not only that, but even the developers who do benefit from this ham-fisted attempt at market liberalization aren’t based in Europe.
Are you really surprised?
After all, the initial implementation of these laws is based on theory, rather than practice. It is, surely, obvious that under free-market theory, people will sell goods and services for as much as the market can sustain.
That means that making it cheaper to sell those goods (by App Store changes) will not automatically translate into any wider consumer benefit. But it is more likely to turn into yet more profit for those with goods on sale.
In that respect, there can be no tangible consumer benefits from App Store liberalization, so long as prices charged at that store reflect market demand. All that’s really happening is a different split in profit share.
Who cares?
The problem is that consumers are directly harmed by the way in which this new fiscal carve-up is created. That’s because they are forced to accept heightened security and privacy risks as storefronts multiply — even as regulation over the privacy and security of those stores remains relatively weak.
Plus, in the case of App Stores, this also means device vendors (Apple, in particular) end up being forced to provide tech support for people who have problems installing apps from third-party operations. Sure, Apple might not have a legal responsibility to sort these problems out, but it is a company with relatively ethical values and will no doubt spend time trying to help its customers. That amounts to free tech support for those third-party app stores: profitable for them, but it raises Apple’s running costs and degrades the user experience for the rest of us.
Today’s report doesn’t go into all of this, of course. But it’s hard not to see how its criticisms point to the logical conclusion that far from benefitting consumers, App Store liberalization has simply exposed them to potential fraud and other harms, inconsistent user experiences, security threats — all so a few more dollars can land in the laps of the multi-millionaires who paid so much cold hard cash to lobbyists, politicians, and PRs to complain about the so-called “Apple Tax.”
Wake up, people: These folks didn’t resent that so-called tax because you paid it; they resented it because they didn’t get to keep all of it.
What really happens
And that’s precisely what seems to be happening, according to the Apple report. It’s important to note that this report was conducted by economics experts at Analysis Group (paid for by Apple). I won’t paraphrase the entire thing here; you can read it yourself and draw your own conclusions. What I have done is selected just three choice quotes to demonstrate the argument:
“The five top-selling developers in EU App Store storefronts in the three-month period prior to adopting the alternative business terms kept the price of their most popular product (defined as a paid app or a specific in-app purchase, such as a particular subscription or a given number of virtual coins) unchanged, even though they experienced a substantial reduction in the commission rate they paid.”
“Developers’ decision not to pass on commission savings to EU users mirrors Apple’s past experience following the launch of the Small Business Program, which reduced commission rates from 30% to 15% for tens of thousands of small developers beginning in 2021. Less than 5% of those developers’ apps exhibited any price decreases whatsoever after their commission rates decreased.”
“The findings of this study demonstrate that commission savings as a result of the DMA have not led to price decreases for customers and overwhelmingly flowed to developers outside the EU. Despite lower commission rates, developers maintained, or increased, the prices of 91% of products, accounting for 94% of transactions, and the small number of price decreases appear mostly, if not entirely, unrelated to the lower fees. In addition to developers keeping most of the commission savings for themselves, over 86% of the savings went to developers based outside of the EU.”
So, next time someone bewails the Apple tax, just look at what they do. Are they genuinely complaining about Apple’s business practices, or do they just want to take a bigger slice of the pie? Following the money (and the data) suggests the answer.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
Source:: Computer World
By Hisan Kidwai Wordle is the super fun game from the NYT, where you put your vocabulary to the…
The post Wordle Hints & Answer For Today: November 13 appeared first on Fossbytes.
Source:: Fossbytes
A year after its acquisition of Cameyo, Google is making the virtual application delivery platform generally available and integrating it with Chrome Enterprise.
Cameyo’s virtualization technology enables businesses to access “legacy” Windows applications, from ERP tools to AutoCAD or Excel, whose absence has been a major limitation for Chromebooks in the workplace. It differs from traditional virtual desktop infrastructure (VDI) tools by delivering just the individual app a user requires, Google said; the app is then accessed via the Chrome browser or as a progressive web app.
On Wednesday, Google announced that Cameyo by Google, as it’s now known, is generally available, priced at $132 per user a year.
“Cameyo by Google helps us deliver on our vision for the future of work, one where you can access all of your applications side by side,” Rob Beard, product manager at Google, said during a briefing. It enables a “workspace where web apps and legacy applications are virtually the same, where the virtualization layer is invisible to the end users.”
IT admins, Beard said, can “deliver apps to end users’ devices in minutes, without having to configure or even touch those end user devices.”
Google has also added an integration between Cameyo and Chrome Enterprise Premium, its browser and device management tool. This will ease the “deployment and management” of virtual apps, said Beard, with access controls available via the Google Admin Console. The integration enables additional security features around virtual apps, such as URL filtering and data loss prevention (to stop users from copying data out of an SAP app running in Cameyo, for instance).
Another addition is the ability for Google’s Gemini AI assistant to interact with Cameyo-based Windows apps. Otherwise, Cameyo users shouldn’t notice much difference from the product they’ve been using already, said Beard.
Cameyo by Google could help organizations that standardize on Google’s enterprise offerings continue to use legacy Windows apps, said Tom Mainelli, IDC group vice president, device and consumer research.
“Cameyo looks great as a standalone virtualization solution, but what’s powerful about this launch is its increased integration with the broader Google enterprise suite,” he said.
The ability to access Google’s Gemini AI in legacy apps could prove useful for end users, he said, while the Chrome Enterprise Premium integration means customers can “layer on additional security features” to those virtual apps.
While Cameyo by Google won’t convince a fully Windows-based enterprise to move entirely to Google’s ecosystem, it can “make it easier for organizations that are curious about and experimenting with Google’s various enterprise offerings to move more users into that ecosystem,” he said.
Source:: Computer World
Western European organizations are ramping up investments in local and regional cloud providers because growing geopolitical tensions are raising concerns that access to global cloud services could be disrupted for political reasons.
A survey of 214 CIOs and IT leaders in Western Europe, conducted by Gartner between May and June, found that more than 61% plan to increase their reliance on local and regional cloud providers due to geopolitics. More than half (53%) plan to restrict future use of global cloud providers for the same reason — and 44% reported they’re already limiting use.
“It shows that geopolitics absolutely have an impact on the decision making of organizations when it comes to cloud,” said Rene Buest, senior director analyst at Gartner.
There are several reasons for an increased focus on digital sovereignty, according to Buest.
One is a fear the US government could block access to cloud services. For instance, the International Criminal Court’s (ICC) chief prosecutor, Karim Khan, reportedly lost access to Microsoft services earlier this year, several months after US President Donald Trump placed sanctions on the organization. Microsoft has since denied it suspended services for the ICC.
Other cases, such as Adobe cutting off Venezuelan customers in compliance with US sanctions against that country, reinforced concerns about who has access to their data.
“Digital sovereignty has a lot to do with control — who has control over the technology or over the cloud I’m on,” said Buest. “And if I’m not able to control it, there’s the likelihood that I won’t be operational at some point anymore.”
There’s also uncertainty around trade negotiations, with many worried that tariffs could be placed on US cloud services.
Amid geopolitical uncertainty, many European organizations are turning to alternatives to established cloud providers, and 55% plan to expand their use of open-source software, according to the Gartner survey.
Several public sector organizations in the region are moving to open-source digital workplace apps. The German federal state of Schleswig-Holstein is replacing Microsoft software with LibreOffice, Nextcloud, and Open-Xchange, while the city of Lyon in France will replace Windows and Office with open-source alternatives. And the Austrian Armed Forces will reportedly deploy LibreOffice to 16,000 workstations.
Digital sovereignty is expected to grow as a priority globally, according to Gartner. By 2030, the analyst firm forecasts that more than 75% of all enterprises outside the US will have a digital sovereignty strategy that involves local or regional cloud usage.
Heightened interest in digital sovereignty will result in increased spending on local clouds and open-source applications, but the likes of Amazon, Google, and Microsoft are unlikely to be too troubled. These hyperscalers account for 70% of the IaaS, PaaS, and hosted private cloud market in Europe, according to a report by Synergy Research Group from July. US tech giants also lead in SaaS, with Microsoft 365 and Google Workspace widely used by private and public sector organizations.
Buest expects some cloud spending to shift to European providers in the coming years, boosting revenues for local suppliers. But a mass exodus of customers from global cloud providers is unlikely.
“We won’t see a big shift, or that the hyperscalers lose an immense amount of market share,” said Buest. “It’s still a drop in the ocean.”
For CIOs and other IT leaders, his advice is to select which workloads are appropriate for a sovereign cloud and when to rely on hyperscalers — a room-booking application would contain less sensitive corporate information, for instance. They should assess the current level of sovereignty and control over their data and understand the likelihood of various risk scenarios.
“So basically, [it’s] good old risk management,” said Buest.
Source:: Computer World
By Partner Content Imagine you’re at a café or airport and you see a free Wi-Fi network. You connect…
The post How To Stay Safe on Public Wi-Fi: Beginner’s Guide to Privacy and Proxies appeared first on Fossbytes.
Source:: Fossbytes
By Deepti Pathak Over the last year, ChatGPT has become one of the most utilized tools for writing, research,…
The post 5 ChatGPT Tips You Should Stop Using Right Now appeared first on Fossbytes.
Source:: Fossbytes
By Deepti Pathak Music tastes can change every week, and Spotify has just made it easier to keep track…
The post New Spotify Update Helps You Track Your Music Weekly appeared first on Fossbytes.
Source:: Fossbytes
In case you hadn’t noticed, change is in the air.
Over the past few years, every day seemingly brings new tales of how businesses are still trying to integrate generative AI (genAI) tools, figure out what agentic AI can do for them, and decipher what genAI firms are really saying about the new features they routinely unveil.
There are ongoing reports that augmented or virtual reality really does have a future in the business world, AI PCs will take over the PC market in a post-Windows 10 world, Arm-based PCs will change everything, and Apple has moved into the enterprise space faster than expected.
There’s a lot going on in the enterprise tech world. Add all those changes to the day-to-day standard IT job of keeping everything up and running, and many IT leaders and professionals can quickly get overwhelmed.
So how do you keep up?
Here are 14 ways IT departments can approach this conundrum while maintaining their sanity amid constant change — especially with everyone from the C-suite to front-line workers clamoring for the latest tech.
Set expectations. Be clear about what you can realistically accomplish in terms of new technology given your budget and manpower. Set firm boundaries and be consistent both within and outside the IT department.
Rely on trusted sources for research. Cultivate tech news sources you can rely on. Many tech developments, especially around hot topics like genAI, are routinely covered by a variety of media sources (including mainstream news outlets and tech influencers). Separate the hype from what is really going on — what’s working and what isn’t.
Communicate with peers. In addition to curating sources, having relationships with people at other organizations in your industry (and sometimes outside your industry) is a powerful way to see what colleagues are doing and get perspective. This can help lead to innovations within your IT department, opportunities for collaboration, and potential new hires.
Be open to suggestions. Being receptive to ideas inside and outside IT can be critical. No IT leader or admin is an expert on everything when it comes to emerging technology. Good ideas can come from anywhere, so it’s important to demonstrate your openness. (Not every idea will be worth pursuing, of course.)
Open proofs of concept. When ideas seem attainable and worth exploring in depth, a proof-of-concept project is the logical next step. Each should be well defined, with set timelines and measurable goals. If projects are open-ended or vaguely planned, they risk becoming zombie projects that never die.
Realize not everything will work. That’s why proofs of concept and pilot projects are important. Be prepared for failure. Many, if not most, ideas or projects will fail or at least go through a rough patch. But even failures can be useful learning experiences. Set expectations accordingly.
Encourage experimentation, but with guardrails. Experimentation is a good thing, be it by technical staff, executives, or everyday users. You do yourself a disservice by outlawing experimentation, but you can’t let it go unchecked. Whether for security reasons, IT resource limits, or usability/user training requirements, you need to keep experiments from overtaking everything else you need to do.
Shadow IT exists; use it. For years, studies have shown that shadow IT — where users quietly build their own workflows and processes and even make their own purchases without informing IT — is more prevalent than many decision makers realize. With almost any new technology, users will experiment, with or without IT’s knowledge. (This is how BYOD began.) Your best approach is to allow this to happen, and in some circumstances encourage it. Banning it isn’t an effective strategy, and you might actually learn how to incorporate various tools and techniques into larger, more managed projects.
If you say no, explain why. There’s an old adage that IT is the department of “no,” always shutting down people and ideas. Even if you’re OK with shadow IT projects, there will be times you have to draw a line. Few people like being told no, but if you can explain your rationale, most will accept it. (Whatever idea you’ve vetoed could still reemerge in the shadows; a solid explanation of IT’s thinking gives you an opportunity to work with those employees cooperatively.)
Work with vendors, partners and consultants. No IT department is an island. Everyone has to deal with vendors, consultants and other partners to successfully get a handle on new technologies. Outside relationships can bring forward new ideas, allow IT to see things with fresh eyes and augment your internal staff. (Beware of “partners” too focused on hype — and be certain that they understand your current position and specific enterprise needs.)
Create centers of excellence. These centers can be a good way to educate staffers, execs and front-line employees about the challenges of exploring, adopting and integrating emerging technologies. This can relieve pressure on IT leaders to be up to date on every tech development and how it relates to your company. And they can help build a working group to establish expertise, use cases, best practices and needed requirements, documentation and support.
Avoid hype. Control your enthusiasm for new technology. This doesn’t mean you don’t show enthusiasm; it does mean that you operate in a “no hype” zone, where clear eyes and realism are in order.
Remember scalability, support and security. New technologies can be exciting, but IT has to consider how each will scale, the strain they’ll place on tech support, and how they could affect corporate security. As each new concept, product or initiative arises, IT always has to keep these three areas in mind.
Be open to disruption, but be realistic. GenAI, agentic AI, AI PCs — note the AI thread running through all three — are potentially massive disruptors of the status quo. IT can’t afford to be afraid of disruption, but it’s important to remain realistic about the nuts and bolts of getting new tech initiatives working — as well as the potential effects on your organization and its workers.
A host of new technologies is coming to market faster than ever. Knowing how to evaluate them and their potential impact on your business is a requirement for every IT leader. Things will never be as simple as they once were, but you can develop pathways and processes for you, your staff and organization to manage the flood of news and announcements and separate the potential from the hype.
Because the pace of change isn’t likely to slow down anytime soon.
Source:: Computer World
Later this month, Microsoft plans to enhance its M365 productivity suite with “Agentic Users,” autonomous AI agents with their own identities and access to enterprise IT systems that can collaborate with one another and with humans.
“These agents can attend meetings, edit documents, communicate via email and chat, and perform tasks autonomously,” Microsoft said in an addition to its product roadmap entitled “Microsoft Teams: Discovery and Creation of Agentic Users from Teams and M365 Agent Store.” The update will be rolled out to desktop systems worldwide beginning later in November, according to the roadmap entry.
Microsoft provided further details about its plans for Agentic Users in a message posted to the Microsoft Admin Center, according to various reports from around the web.
Microsoft MVP João Ferreira posted what he said was a copy of the message, LC1183300, to his personal blog.
“Agentic Users are a new class of AI-powered digital entities designed to function as autonomous, enterprise-grade virtual colleagues. Unlike traditional bots, Agentic Users are provisioned as full-fledged user objects with their own identity in the organization’s directory (via Entra ID or Azure AD), email addresses, Teams accounts, and presence in the org chart,” the purported Microsoft announcement began.
All users in enterprises with access to Microsoft Teams and the Microsoft 365 Copilot store will be able to view agent templates, although only approved users will be able to create agents from those templates, it continued.
Microsoft did not respond to an email seeking confirmation of the authenticity of the message.
The post also contains images that hint at the use cases for these new Agentic Users, including procurement, HR initiatives such as employee wellness, tracking team tasks, and developing workflows. There was no mention, though, of how Agentic Users compare to the many agents that M365 already offers, including its Facilitator and Project Manager agents, its Office Agent, and different flavors of Copilots for sales, service, and finance operations.
Confusion over licensing, increasing costs, and Microsoft’s revenue play
Before they can create any of the new agents, admins will need to approve a template for use and “assign the required A365 license,” the posting said. No further information was included about the nature or pricing of these licenses.
Analysts speculated how the new A365 licenses might relate to existing M365 licenses, which are typically sold on a per-user, per-month basis with an annual commitment.
“Previously released agents like Facilitator or Project Manager were bundled under M365 Copilot entitlements, with advanced actions billed via Copilot credits. A365 introduces explicit per-agent licensing and admin-controlled approval through the Agent Store and separates agent costs from human seats,” said Forrester vice president and principal analyst Charlie Dai.
Alexander Golev, partner at SAM Expert, a specialist in managing Microsoft licensing and cloud costs, suggested instead that A365 will replace M365 user licenses.
“Our expectation is that it will provide a combination of user-like access to Microsoft 365 services on a monthly/annual/3-annual fee basis plus the core functionality of M365 Copilot. Additional AI use will be charged in the same manner as with users — prepaid capacities and pay-as-you-go items. We don’t expect them to be all-inclusive,” he said.
Microsoft has been changing its licensing practices to increase its revenue significantly, Golev said. “In recent years, they moved from server- or device-based licensing to CPU-based and then core-based.”
In its reporting of M365 revenue, Microsoft focuses on average revenue per user (ARPU), “which is now hitting its ceiling,” he said. “You can only scale so far. Earth’s population grows slower than Microsoft’s revenue targets. What we have been predicting is the move to ARPA — Average Revenue per Agent, which can scale exponentially,” Golev said.
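Golev’s ARPU-versus-ARPA point comes down to simple arithmetic: per-user revenue is capped by headcount, while per-agent revenue is not. A toy calculation makes the mechanism concrete; all prices and counts below are invented for illustration, not Microsoft figures:

```python
# Toy arithmetic behind the ARPU-to-ARPA argument: seat revenue is capped by
# headcount, while per-agent revenue can grow with the number of agents an
# organization deploys. All figures below are invented for illustration.

def annual_revenue(seats: int, price_per_seat_month: float) -> float:
    """Annual revenue for a given number of monthly-priced seats."""
    return seats * price_per_seat_month * 12

employees = 10_000
arpu_revenue = annual_revenue(employees, 30.0)  # capped by headcount

# Agents per employee can keep growing; headcount cannot.
agents_per_employee = 5
arpa_revenue = annual_revenue(employees * agents_per_employee, 10.0)

print(f"User seats:  ${arpu_revenue:,.0f}/yr")
print(f"Agent seats: ${arpa_revenue:,.0f}/yr")
```

Even at a third of the hypothetical per-seat price, agent licenses overtake user licenses once each employee supervises a handful of agents — which is exactly the scaling story Golev describes.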
Another licensing expert, Rich Gibbons, blogged about the new A365 licenses, saying that he expected Microsoft to use them as an opportunity to generate additional consumption-based revenue.
Risk of agent sprawl?
Analyst Pareekh Jain of Jain Consulting said he expects Microsoft to update its previously introduced agents as part of the rollout of Agentic Users, giving the existing agents their own email addresses and Microsoft Entra IDs too.
“Too many autonomous agents for overlapping or redundant tasks mirror the challenges most enterprises have faced with bot and app sprawl in prior M365 deployments. Without tight governance, enterprises could face duplication, higher spend, data security exposure, and oversight challenges,” Jain said.
But, said Dai, Entra IDs could play a pivotal role in avoiding that sprawl. “These IDs can be used to treat agents as directory-backed identities, enabling lifecycle control, access reviews, and compliance policies … helping enterprises gain visibility and accountability,” he said.
Even before we learn all the details of these new Agentic Users, it’s clear that there are some aspects of them to which enterprises will have to pay particular attention.
Dai pointed to the need for effective collaboration between IT asset management and FinOps teams, while Everest analyst Tanvi Rai said enterprises will need strong change management to train employees to supervise, validate, and govern agent behavior effectively.
Despite all the uncertainty, what is evident is that Microsoft’s move will further intensify the race among rivals, such as Salesforce and ServiceNow, who are also accelerating their efforts to introduce autonomous AI-driven agents aimed at boosting productivity across enterprises.
Source:: Computer World
It’s been an uncomfortable few days for AI vendors. On Friday, the big tech companies saw $1.2 trillion wiped off their market valuations, reflecting the concerns of many analysts that AI valuations are too high and the market is heading for a serious crash.
Just a few days earlier, OpenAI CFO Sarah Friar suggested that the US government could help the industry by providing a “backstop” to guarantee commercial loans financing AI chips in data centers — although hours later she took back those words in a LinkedIn post. The same day, OpenAI CEO Sam Altman also denied the company wanted government loan guarantees in a mammoth 6,000-character post on X (formerly Twitter).
So how should CIOs view the future of their own AI investments? Financial analysts have a mixed view.
According to Shawn DuBravac of the Avrio Institute, big tech customers need to be more pragmatic, but don’t need to panic. “Companies don’t need to rewrite their strategy, but market volatility is a stress test of AI investment. The large tech companies recognize that the long-term demand for AI infrastructure is very strong.”
Ilya Rybchin, principal at financial advisory firm BDO USA, said that CIOs shouldn’t be worried about the technology becoming obsolete or vendors disappearing. “Customers should be worried about the anemic return on their own AI investments, irrespective of how their vendors are performing or what the media is saying about their vendors.”
Freeze AI procurement
He had some stark advice. “Companies should freeze new AI procurement. They should stop buying tools until they can prove they’re getting value from the ones they have.” He added that many companies are buying multiple AI platforms without using any effectively. “It’s like buying three chainsaws when you haven’t learned to use the first one,” he said.
Global technology futurist Daniel Burrus of Burrus Research predicted that organizations may need to rethink staffing levels. “We’re seeing a lot of layoffs due to AI investments, particularly among coders. However, I think these companies are missing a trick. I prefer to think of AI as Augmented Intelligence as it’s about augmenting, rather than replacing.”
He said that there is already a change in the air. “We are seeing companies who have laid off people hiring them back.”
Concerns that the AI market is a bubble that’s about to burst won’t entirely go away. Altman has said that OpenAI is projecting an annualized revenue run rate of $20 billion this year and is committed to spending $1.4 trillion over the next eight years. Just last week, the company signed a deal with AWS for $38 billion to host its services on Amazon’s cloud service. That’s heavy investment and there will certainly be doubts whether it can grow revenue to match that expenditure.
Burrus draws parallels with Amazon. “It took a very long time for them to make a profit but they’re racing for the intelligence to be better than a human being, and that is going to take some time.” However, Amazon didn’t make the dizzying levels of investment that OpenAI is committing itself to.
Not an extinction-level event
There is agreement, however, that the AI industry as a whole can survive without government support for one failing company.
DuBravac said, “If OpenAI stumbles, customers would feel turbulence and disruption but nothing that they couldn’t overcome.”
And Rybchin said that it could aid the progress of AI. “OpenAI failing would not be an extinction-level event for AI. On the contrary, it could be a healthy catalyst, forcing a necessary diversification of the AI landscape, encouraging competition and innovation from a wider range of players.”
Source:: Computer World
By Hisan Kidwai When consumer drones first came about, they were extraordinarily expensive, bulky, and only suitable for filmmakers…
The post Why the Ruko U11MINI 4K Is the Best Thanksgiving Gift To Capture Moments appeared first on Fossbytes.
Source:: Fossbytes
By Deepti Pathak WhatsApp uses several small icons and symbols to make conversations and calls handier for the user….
The post What Do The Different Icons & Symbols on WhatsApp? appeared first on Fossbytes.
Source:: Fossbytes
By Hisan Kidwai Free Fire Max is one of the most popular games on the planet, and for good…
The post Garena Free Fire (FF) Max Redeem Codes For Today: November 9 appeared first on Fossbytes.
Source:: Fossbytes
By Hisan Kidwai Free Fire Max is one of the most popular games on the planet, and for good…
The post Garena Free Fire (FF) Max Redeem Codes For Today: November 8 appeared first on Fossbytes.
Source:: Fossbytes
Do you think it’s time to turn an AI agent loose to do your procurement for you? As that could be a potentially expensive experiment to conduct in the real world, Microsoft is attempting to determine whether agent-to-agent ecommerce will really work, without the risk of using it in a live environment.
Earlier this week, a team of its researchers launched the Magentic Marketplace, an initiative they described as “an open source simulation environment for exploring the numerous possibilities of agentic markets and their societal implications at scale.” It manages capabilities such as maintaining catalogs of available goods and services, implementing discovery algorithms, facilitating agent-to-agent communication, and handling simulated payments through a centralized transaction layer.
The 23-person research team wrote in a blog detailing the project that it provides “a foundation for studying these markets and guiding them toward outcomes that benefit everyone, which matters because most AI agent research focuses on isolated scenarios — a single agent completing a task or two agents negotiating a simple transaction.”
But real markets, they said, involve a large number of agents simultaneously searching, communicating, and transacting, creating complex dynamics that can’t be understood by studying agents in isolation, and capturing this complexity is essential “because real-world deployments raise critical questions about consumer welfare, market efficiency, fairness, manipulation resistance, and bias — questions that can’t be safely answered in production environments.”
They noted that even state-of-the-art models can show “notable vulnerabilities and biases in marketplace environments,” and that, in the simulations, agents “struggled with too many options, were susceptible to manipulation tactics, and showed systemic biases that created unfair advantages.”
Furthermore, they concluded that a simulation environment is crucial in helping organizations understand the interplay between market components and agents before deploying them at scale.
In their full technical paper, the researchers also detailed significant behavioral variations across agent models, which, they said, included “differential abilities to process noisy search results and varying susceptibility to manipulation tactics, with performance gaps widening as market complexity increases,” adding, “these findings underscore the importance of systematic evaluation in multi-agent economic settings. Proprietary versus open source models work differently.”
Bias and misinformation an issue
Describing Magentic Marketplace as “very interesting research,” Lian Jye Su, chief analyst at Omdia, said that despite recent advancements, foundation models still have many weaknesses, including bias and misinformation.
Thus, he said, “any e-commerce operators that wish to rely on AI agents for tasks such as procurement and recommendations need to ensure the outputs are free of these weaknesses. At the moment, there are a few approaches to achieve this goal. Guardrails and filters will enable AI agents to generate outputs that are targeted and balanced, in line with rules and requirements.”
Many enterprises, said Su, “also apply context engineering to ground AI agents by creating a dynamic system that supplies the right context, such as relevant data, tools, and memory. With these tools in place, an AI agent can be trained to behave more similarly to a human employee and align the organizational interests.”
Similarly, he said, “we can therefore apply the same philosophy to the adoption of AI agents in the enterprise sector in general. AI agents should never be allowed to behave fully autonomously without sufficient check and balance, and in critical cases, human-in-the-loop.”
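The guardrail-plus-human-in-the-loop pattern Su describes can be as simple as a rule-based policy check that sits between an agent’s proposed action and its execution. The sketch below is a minimal illustration; the blocked terms, spend limit, and field names are made-up examples, not any product’s API:

```python
# Minimal rule-based guardrail: check a proposed agent action against policy
# before it executes, and escalate to a human for high-value decisions.
# The rules, limits, and field names here are invented for illustration.

BLOCKED_TERMS = {"wire transfer", "gift card"}
SPEND_LIMIT = 500.00  # auto-approve ceiling; above this, human-in-the-loop

def review(action: dict) -> str:
    """Return 'approve', 'escalate', or 'block' for a proposed purchase."""
    text = action["description"].lower()
    if any(term in text for term in BLOCKED_TERMS):
        return "block"
    if action["amount"] > SPEND_LIMIT:
        return "escalate"  # route to a human reviewer
    return "approve"

print(review({"description": "Office chairs, qty 4", "amount": 320.0}))      # approve
print(review({"description": "Server lease renewal", "amount": 4800.0}))     # escalate
print(review({"description": "Buy gift cards for vendor", "amount": 50.0}))  # block
```

Real deployments would layer on context engineering and model-level filters, but the principle is the one Su states: the agent never acts fully autonomously without a check and, in critical cases, a human.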
Thomas Randall, research lead at Info-Tech Research Group, noted, “The key finding was that when agents have clear, structured information (like accurate product data or transparent listings), they make much better decisions.” But the findings, he said, also revealed that these agents can be easily manipulated (for example, by misleading product descriptions or hidden prompts) and that giving agents too many choices can actually make their performance worse.
That means, he said, “the quality of information and the design of the marketplace strongly affect how well these automated systems behave. Ultimately, it’s unclear what massive value-add organizations may get if they let autonomous agents take over buying and selling.”
Agentic buying ‘a broad process’
Jason Andersen, vice president and principal analyst at Moor Insights & Strategy, said the areas the researchers looked into “are well scoped, as there are many different ways to buy and sell things. But, instead of attempting to execute commerce scenarios, the team kept it pretty straightforward to more deeply understand and test agent behavior versus what humans tend to assume naturally.”
For example, he said, “[humans] tend to narrow our selection criteria quickly to two or three options, since it’s tough for people to compare a broad matrix of requirements across many potential solutions, and it turns out that model performance also goes down when there are more choices as well. So, in that way there is some similarity between humans and agents.”
Also, Andersen said, “by testing bias and manipulation, we can see other patterns such as how some models have a bias toward picking the first option that met the user’s needs rather than examining all the options and choosing the best one. These types of observations will invariably end up helping models and agents improve over time.”
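That first-option bias is easy to demonstrate in miniature: an agent that takes the first listing meeting the requirement can leave value on the table compared with one that scans every option. The listings below are invented for illustration:

```python
# Toy illustration of the satisficing bias described above: taking the first
# acceptable option versus scanning all options for the best one.
# The vendor listings are invented for illustration.

listings = [
    {"name": "Vendor A", "meets_spec": True,  "price": 90.0},
    {"name": "Vendor B", "meets_spec": False, "price": 60.0},
    {"name": "Vendor C", "meets_spec": True,  "price": 70.0},  # the best deal
]

def first_satisficing(options):
    """Pick the first option that meets the spec (the biased strategy)."""
    return next(o for o in options if o["meets_spec"])

def best_overall(options):
    """Scan all options and pick the cheapest one that meets the spec."""
    return min((o for o in options if o["meets_spec"]), key=lambda o: o["price"])

print(first_satisficing(listings)["name"])  # Vendor A
print(best_overall(listings)["name"])       # Vendor C
```

The bias also creates a manipulation surface: under the first strategy, whoever ranks first in the search results wins the sale regardless of price, which is one reason the researchers flagged susceptibility to manipulation tactics.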
He also applauded the fact that Microsoft is open sourcing the data and simulation environment. “There are so many differences in how products and solutions are selected, negotiated, and bought from B2B versus B2C, Premium versus Commodities, cultural differences and the like,” he said. “An open sourcing of this tool will be valuable in terms of how behavior can be tested and shared, all of which will lead to a future where we can trust AI to transact.”
One thing this blog made clear, he noted, “is that agentic buying should be seen as a broad process and not just about executing the transaction; there is discovery, selection, comparison, negotiation, and so forth, and we are already seeing AI and agents being used in the process.”
However, he observed, “I think we have seen more effort from agents on the sell side of the process. For instance, Amazon can help someone discover products with its AI. Salesforce discussed how its Agentforce Sales now enables agents to help customers learn more about an offering. If [they] click on a promotion and begin to ask questions, the agent can then help them through a decision-making process.”
Caution urged
On the buy side, he said, “we are not at the agent stage quite yet, but I am very sure that AI and chatbots are playing a role in commerce already. For instance, I am sure that procurement teams out there are already using chat tools to help winnow down vendors before issuing RFIs or RFPs. And probably using that same tool to write the RFP. On the consumer side, it is very much the same, as comparison shopping is a use case highlighted by agentic browsers like Comet.”
Andersen said that he would also “urge some degree of caution for large procurement organizations to retool just yet. The learnings so far suggest that we still have a lot to learn before we see a reduction of humans in the loop, and if agents were to be used, they would need to be very tightly scoped and a good set of rules between buyer and seller be negotiated, since checking ‘my agent went rogue’ is not on the pick list for returning your order (yet).”
Randall added that for e-commerce operators leaning into this, it is “imperative to present data in consistent, machine-readable formats and be transparent about prices, shipping, and returns. It also means protecting systems from malicious inputs, like text that could trick an AI buyer into making bad decisions — the liabilities in this area are not well-defined, leading to legal headaches and complexities if organizations question what their agent bought.”
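The “consistent, machine-readable formats” Randall calls for can be enforced with a simple schema check before a listing is exposed to agent buyers. The required fields below are illustrative assumptions, not a published standard:

```python
# Sketch of a machine-readability check for product listings: validate
# required fields and types before exposing a product to agent buyers.
# The field names are illustrative assumptions, not a published schema.

REQUIRED = {"sku": str, "price": float, "currency": str,
            "shipping_days": int, "returns_window_days": int}

def validate_listing(listing: dict) -> list:
    """Return a list of problems; an empty list means the listing is agent-ready."""
    problems = []
    for field, ftype in REQUIRED.items():
        if field not in listing:
            problems.append(f"missing {field}")
        elif not isinstance(listing[field], ftype):
            problems.append(f"{field} should be {ftype.__name__}")
    return problems

good = {"sku": "A-100", "price": 19.99, "currency": "USD",
        "shipping_days": 3, "returns_window_days": 30}
# Wrong type for price, and two fields missing entirely:
bad = {"sku": "A-101", "price": "19.99", "currency": "USD"}

print(validate_listing(good))  # []
print(validate_listing(bad))
```

An agent consuming the first listing gets exactly the price, shipping, and returns transparency Randall describes; the second would be rejected before an AI buyer could misread it.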
Businesses, he said, should expect a future where some customers are bots, and plan policies and protections accordingly, including authentication for legitimate agents and rules to limit abuse.
In addition, said Randall, “many companies do not have the governance in place to move forward with agentic AI. Allowing AI to act autonomously raises new governance challenges: how to ensure accountability, compliance, and safety when decisions are made by machines rather than people — especially if those decisions cannot be effectively tracked.”
Sharing the sandbox
For those who’d like to explore further, Microsoft has made Magentic Marketplace available as an open source environment for exploring agentic market dynamics, with code, datasets, and experiment templates available on GitHub and Azure AI Foundry Labs.
Source:: Computer World