Rokid is bringing wearable control to smart glasses at a lower cost than Meta

Home » Archive by Category "Technology" (Page 58)

By Manisha Priyadarshini Rokid is pairing its display-free smart glasses with Mudra’s neural wristband to offer wearable, touch-free control at a significantly lower cost than Meta’s smart glasses and Neural Band.
The post Rokid is bringing wearable control to smart glasses at a lower cost than Meta appeared first on Digital Trends.

Source:: Digital Trends

Microsoft scraps planned email limits for Exchange Online

Microsoft has backed away from its plan to introduce a limit of 2,000 emails per day in Exchange Online. The change, announced in 2024 and set to go into effect last year, was aimed at reducing the amount of online spam. But the limitation was met with fierce criticism from business users, and it is now clear that Microsoft has shelved the idea.

“Customers have shared that this limitation creates significant operational challenges. Your feedback is important, and we are committed to solutions that balance security and usability without causing unnecessary disruption,” Microsoft officials wrote on the Exchange Team blog.

Microsoft will now look for alternative solutions to combat spam and abuse of Exchange Online, according to Bleeping Computer.

Source:: Computer World

CES 2026: AI compute sees a shift from training to inference

LAS VEGAS — Not so long ago — last year, let’s say — tech industry spending was all about the big AI companies spending billions of dollars on training ever-larger frontier AI models.

That’s rapidly changing, which is why at this year’s CES, AI inference was at the heart of the show’s keynote speeches and major announcements. (Inference is when a trained AI model applies what it has learned to new, previously unseen data.)

Until recently, according to Lenovo CEO Yuanqing Yang, most AI spending was related to training. Approximately 80% went to creating the large language models (LLMs) that underpin generative AI, he said, with the remaining 20% to the inference side.

That is starting to change. “In the future, those numbers are reversed,” he told reporters Wednesday at CES. “Eighty percent will be on the inference and 20% will be on training. That is our forecast.”

And that’s why Lenovo launched three new inference servers on Tuesday, he said. “We definitely want to lead the trend.”

According to industry experts, that shift is already under way. In a November report, Deloitte estimated that inference workloads accounted for half of all AI compute in 2025 — a figure that will jump to two-thirds in 2026. 

The actual infrastructure spending lags a little bit, Lenovo’s Ashley Gorakhpurwalla, executive vice president and president of infrastructure solutions group, told Computerworld. “When you train foundational models, you start big, and you put all the capital in up front,” he said.

But when enterprises deploy AI, such as a chatbot, they start small and slowly scale up. “People deploy, iterate, and move forward,” Gorakhpurwalla said. “When you first deploy a chatbot, it’s a small expense.”

However, even on the spending side, 2026 will be a big inflection year for inference, according to a December report by the Futurum Group. “We’re seeing a clear shift,” Futurum analyst Nick Patience said in the report. “Inference workloads are set to overtake training revenue by 2026.”

Enterprises are moving from experimentation to deployment, boosting the demand for AI inference servers, and are also increasing hybrid and edge deployments.

That’s the rationale for Lenovo’s decision to launch three new inferencing servers at CES this week.

The servers include the Lenovo ThinkSystem SR675i, designed to run full-sized LLMs for applications in areas like manufacturing, healthcare and financial services; the Lenovo ThinkSystem SR650i, which is designed to be scalable and easy to deploy in existing data centers; and the Lenovo ThinkEdge SE455i, a compact server built for retail, telco and industrial environments.

This isn’t Lenovo’s first foray into inferencing, or even into small-scale inferencing servers. It released its first entry-level AI inferencing server for edge AI in March 2025.

The company also offered other servers capable of handling AI workloads that weren’t specifically marketed as inferencing servers.

There are three main drivers for enterprises looking to buy and deploy their own inference servers, said Arthur Hu, senior vice president, global CIO and chief delivery and technology officer for Lenovo’s solutions and services group. That role puts him in direct contact with enterprise customers.

First, customers are getting more strategic about how they use cloud computing, he said in an interview at CES. The public cloud is good for early experimentation or when a company needs to deploy over a large geography.

“But if you know what your workload size and predictability is, you don’t need to pay the additional premium,” he said.

Another adoption driver is the need to use data where it’s generated. “You don’t want to store all the data all the time,” Hu said. With an edge AI server, the data can be immediately used when needed, then discarded later.

Finally, there are the privacy, security and sovereignty concerns. “Everyone is very sensitive that they can control their data and govern it,” he said.

When a company runs its own AI inferencing, the data never needs to be out of its corporate hands. 

Lenovo wasn’t the only company making bets on AI inferencing this week. AMD announced the AMD Instinct MI440X GPU, designed for on-premises inferencing for enterprise AI. And while Lenovo rivals Dell and HPE didn’t announce similar hardware at CES, they did both release new inferencing servers last year.

Dell’s air-cooled PowerEdge XE9780 and XE9785 servers integrate into existing enterprise data centers, while the liquid-cooled Dell PowerEdge XE9780L and XE9785L servers support rack-scale deployment, the company announced last May.

For its part, HPE last March released its latest AI servers, including the HPE ProLiant Compute DL380a Gen12. It offers support for up to 16 GPUs as well as direct liquid cooling, and is purpose built for AI fine-tuning and inference. 

Given the rapidly increasing demand for enterprise inferencing, more announcements from the major players are likely this year.

Editor’s note: Lenovo paid for Maria Korolov’s transportation and hotel costs for this year’s CES, but had no editorial role in the creation of this story.

Source:: Computer World

Powerbeats Pro 2 are $50 off at $199.99 right now

By Omair Khaliq Sultan If your earbuds slip mid-run or you’ve ever had to shove them back in during a lift, you already know why Powerbeats exist. The Beats Powerbeats Pro 2 true wireless active noise canceling earbuds are down to $199.99 (down from $249.99), saving you $50. It’s not the biggest discount in the world, but it’s the […] The post Powerbeats Pro 2 are $50 off at $199.99 right now appeared first on Digital Trends.

Source:: Digital Trends

ChatGPT Health has arrived

By Ana-Maria Stanciuc I’ve said this many times: the products we see on the market are rarely visionary leaps. Most of the time, they are mirrors. They reflect people’s habits, shortcuts, fears, and small daily behaviours. Design follows behaviour. Always has. Think about it. You probably know at least one person who already uses ChatGPT for health-related questions. Not occasionally. Regularly. As a second opinion. As a place to test concerns before saying them out loud. Sometimes even as a therapist, a confidant, or a space where embarrassment does not exist. When habits become consistent, companies stop observing and start building. At that…This story continues at The Next Web

Source:: The Next Web

AGIBOT’s CES 2026 Takeover: Humanoid Robots Have Left the Lab for Good

Everything HP Announced at CES 2026: New OmniBook, Chromebooks & More

Digital Trends’ Top Tech of CES 2026 awards: our favorite picks from the show floor

ChatGPT Health wants to be your medical AI assistant, but don’t expect a diagnosis

By Shikhar Mehrotra ChatGPT Health is OpenAI’s answer to how people are already using AI for medical questions, offering curated sources, clearer guardrails, and safer health-focused responses.
The post ChatGPT Health wants to be your medical AI assistant, but don’t expect a diagnosis appeared first on Digital Trends.

Source:: Digital Trends

Companies can compete against AI by delivering what AI can’t

The pressure for businesses to leverage generative AI (genAI) or agentic AI is massive, but when I hear executives complaining that there is no way to compete against businesses leveraging the fast-moving technology, I’m forced to chuckle. 

There is a powerful way to compete against genAI and it’s textbook simple: note all of the problems with the tech — the list is extensive — and build your value-add on those. Here’s a rundown of some of the competitive advantages companies can offer.

Reliability

GenAI systems have low reliability. That doesn’t mean most of the answers and recommendations they generate are wrong; it’s that errors occur frequently, with no particular pattern. 

That could stem from a variety of causes: hallucinations; insufficient, outdated, or poor-quality training data; problems with the data used to fine-tune the model (the fine-tuning data might be accurate yet still interfere with or contradict the core training data); the system misinterpreting a query; the user mis-phrasing a query; or a dozen other things.

Another data accuracy/reliability issue involves language. GenAI models’ accuracy plunges when they are dealing with non-English information. For the typical multinational company, that could be a massive problem. 

And these systems often ignore guardrails, meaning they may disregard any restrictions you try to impose. 

Add that all up and IT directors simply cannot rely on these systems. It’s like working with a brilliant employee who will periodically make stuff up in an official report. When confronted, the employee is apologetic but stresses that he or she will continue to make stuff up. Can you trust that employee with important work?

I was recently talking with a cybersecurity vendor that decided to avoid genAI tools, with all their compliance, cybersecurity, accuracy, and data leakage issues, and instead rely on traditional machine learning (ML). Alpha Level, which deals with event alert triage, uses an ML approach known as time-series modeling. It also claims the cost is far lower, at least at enterprise volumes.

Real-world expertise

Some executives talk about leveraging expertise as a way to compete against genAI. That is a decent point, but it has to be information that beats genAI.

Consider a law firm. Even more narrowly, consider case law, where attorneys try to find precedent for an argument they want to make. At one level, genAI tools can win that battle. They literally can memorize every word of every court decision — globally, if need be. No lawyer can do that.

But case law research is not merely about reading cases. The attorney needs to understand the intent, the nuances of a case and the relevant history. No genAI system can do that.

Early in my reporting career, I was a full-time court reporter for a daily newspaper. One afternoon, I found myself in the courthouse basement in the law library. In the back, I saw the managing partner of one of the state’s largest law firms, flipping through books. 

I asked him why he was doing such work when he could easily assign it to a more junior lawyer. He smiled and said, “I’ve been doing this for 40 years. I routinely find obscure cases that these young hotshots would never find. I simply know where to look and how to interpret them.”

That is precisely the kind of mastery that will elude genAI.

Another example is in an area close to where I live: journalism. Some media outlets are trying to use genAI to write stories. There are some very basic stories where that might work, such as routine weather reports, maybe sports scores and perhaps even obituaries.

But the ultimate story is what used to be known as “man bites dog” and today is simply “surprising the reader.” To do that, a reporter must find things that readers don’t know and that contradict what they do know. That is exactly what genAI cannot do. Everything the technology churns out is simply a reworded version of what has already been said. 

If you look at fiction writers, such as those writing movie or television scripts, a similar discovery is made. GenAI could replace really bad writers. But the nature of genAI would almost certainly prevent it from writing hit shows where audiences are quoting lines the next day. 

Data Leakage

Data leakage and the related “lack of data control” is connected to how these systems grow. Are they training on the queries made? Will the information shared in a query on Monday find its way into an answer given to a competitor on Friday?

There are straightforward ways to limit such leakage, whether through open source, on-prem closed systems, or even the extreme of air-gapped systems. (Capital One is a great example of an enterprise toying with such limitations to safely use genAI.) 

If a business created a closed-loop system that delivered the flexibility of genAI without the data risks, it could do extremely well. 

Agentic AI

Agentic systems are simply begging for a company to devise a locked-down system for leveraging agents without the massive risks. 

In short, there are quite a few powerful ways to compete successfully with genAI and agentic systems. Just don’t ask ChatGPT to recommend any.

Source:: Computer World

5 areas of ITSM being transformed by automation in 2026

Automation is transforming IT service management (ITSM), moving service desks from reactive, manual workflows toward systems that can intelligently route, prioritize, and resolve issues with minimal human intervention.

Recent research from Freshworks found that IT professionals lose nearly seven hours every week—almost a full workday—to fragmented tools and overly complicated work processes. Implementing ITSM automation reduces manual effort, accelerates resolution, improves consistency and accuracy, enables proactive issue prevention, and delivers faster, more reliable service that measurably improves employee and end-user satisfaction.

As hybrid infrastructure, distributed teams, and rising service expectations become the norm, automation must embed across real workflows, from incidents and requests to assets and changes, to deliver meaningful results.

How is automation improving ITSM?

ITSM automation streamlines the service lifecycle with event-driven workflows, intelligent triage, and rule-based actions. By automating repetitive processes like ticket handling and administrative tasks, businesses ensure routine tasks are executed quickly, predictably, and consistently.

Key elements of ITSM automation include:

Automated workflows: Route, escalate, and resolve tasks based on predefined rules

Service desk automation: Fast ticket creation, categorization, and assignment

SLA automation: Monitor deadlines, apply timers, and trigger alerts

Event-driven automation: Act on system alerts and performance thresholds

Intelligent triage: Sort and prioritize issues based on impact and urgency

These capabilities lay the foundation for faster, more reliable service delivery across IT operations.
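To make the triage and routing ideas above concrete, here is a minimal sketch in Python of rule-based intelligent triage. The keyword table, teams, and impact/urgency scale are hypothetical, not drawn from any specific ITSM product; real platforms express the same logic through their own rule engines.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    summary: str
    impact: int   # 1 (low) to 3 (high)
    urgency: int  # 1 (low) to 3 (high)
    team: str = "unassigned"
    priority: str = "P4"

# Hypothetical routing rules: keyword in summary -> owning team
ROUTING_RULES = {
    "password": "identity",
    "vpn": "network",
    "laptop": "endpoint",
}

def triage(ticket: Ticket) -> Ticket:
    """Prioritize and route a ticket using predefined rules."""
    # Impact x urgency matrix mapped onto a P1..P4 priority scale
    score = ticket.impact * ticket.urgency
    if score >= 9:
        ticket.priority = "P1"
    elif score >= 6:
        ticket.priority = "P2"
    elif score >= 3:
        ticket.priority = "P3"
    else:
        ticket.priority = "P4"
    # First matching keyword decides the owning team
    text = ticket.summary.lower()
    for keyword, team in ROUTING_RULES.items():
        if keyword in text:
            ticket.team = team
            break
    return ticket

t = triage(Ticket("VPN drops every hour", impact=2, urgency=3))
print(t.team, t.priority)  # network P2
```

Production systems layer escalation, notifications, and AI-suggested categories on top, but the core remains this kind of deterministic rule evaluation, which is what makes the outcomes fast and consistent.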

What are the five core areas of ITSM being transformed by automation?

1. Incident and request management: Automation speeds up ticket creation, classification, assignment, and resolution, including:

Auto-categorization and routing

Suggested solutions from knowledge articles

Automated status updates and notifications

Faster mean time to resolution (MTTR)

2. Change and release management: Automation standardizes change approvals and helps reduce risk with:

Preconfigured approval workflows

Impact-based change routing

Automated deployment tasks

3. Asset and configuration management: ITSM platforms use automation to maintain accurate inventories and compliance through:

Automated asset discovery

Continuous configuration item (CI) updates

Event-based alerts for hardware and software drift

4. SLA compliance and governance: Automation helps teams maintain predictable service quality with:

SLA timers and escalation logic

Auto-remediation actions before an SLA breach occurs

Consistent, auditable process execution

5. Cross-department workflows: Automation orchestrates tasks across IT, HR, Facilities, and other teams, including:

Onboarding / offboarding workflows

Access provisioning

Multistep request fulfillment
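The SLA timers and escalation logic in area 4 can likewise be sketched in a few lines. The priority targets and the 80% warning threshold below are illustrative assumptions, not any vendor's defaults; the point is that breach detection and pre-breach remediation are simple clock arithmetic once targets are defined.

```python
from datetime import datetime, timedelta

# Hypothetical SLA resolution targets per priority
SLA_TARGETS = {
    "P1": timedelta(hours=4),
    "P2": timedelta(hours=8),
    "P3": timedelta(hours=24),
}

def sla_status(priority: str, opened_at: datetime, now: datetime) -> str:
    """Return 'ok', 'warning' (80% of the SLA window elapsed), or 'breached'."""
    target = SLA_TARGETS[priority]
    elapsed = now - opened_at
    if elapsed >= target:
        return "breached"   # trigger the escalation workflow
    if elapsed >= 0.8 * target:
        return "warning"    # auto-remediate or notify before the breach
    return "ok"

opened = datetime(2026, 1, 7, 9, 0)
print(sla_status("P1", opened, opened + timedelta(hours=3, minutes=30)))  # warning
```

A scheduler running this check against open tickets is what turns a passive SLA target into the auto-remediation and auditable escalation behavior described above.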

How is AI accelerating ITSM automation?

AI amplifies ITSM automation by adding prediction, guidance, and pattern recognition beyond what traditional workflows offer. For example: 

Predictive insights: Identify recurring issues, detect anomalies, and anticipate outages for proactive service management

Intelligent triage: Analyze ticket data, recommend prioritization, categorize requests, and surface likely root causes

By combining AI with automation, IT teams can move from reactive service to predictive, data-driven operations.

How can companies start implementing ITSM automation?

Implementing ITSM automation doesn’t have to be overwhelming. Businesses can begin small and scale up over time:

Identify high-volume, repetitive tasks: Password resets, access requests, ticket classification, and similar frequent workflows. 

Standardize processes before automating: Clear, repeatable workflows ensure reliable results. 

Prioritize SLAs and high-impact workflows: Automate where it reduces risk, accelerates resolution, or improves user experience. 

Expand into cross-department orchestration: Connect IT with HR, Facilities, Legal, and other teams to streamline end-to-end processes. 

Integrate AI carefully: Use AI for triage recommendations, predictive analytics, and automated diagnostics to enhance—not replace—core workflows. 

Choose a unified ITSM platform: A centralized platform with built-in automation and AI capabilities ensures scalability and consistency. 

Apply ITSM automation at scale with Freshservice

For organizations looking to scale ITSM automation, Freshservice from Freshworks provides a centralized platform and the infrastructure needed to deploy AI-driven workflows, intelligent triage, and automated governance across service operations. The platform helps IT teams achieve faster resolution, improve SLA compliance, and increase operational resilience.

To learn more, visit us here.

Source:: Computer World

Naox launches earbuds that can take EEG to measure your brain activity

By Manisha Priyadarshini Naox is bringing EEG brain tracking out of clinics with Naox Wave, a pair of wireless earbuds designed to monitor brain activity during daily life and translate complex signals into simple wellness insights through a companion app.
The post Naox launches earbuds that can take EEG to measure your brain activity appeared first on Digital Trends.

Source:: Digital Trends

Best Business Laptops of 2026: Top Rated & Expert Tested

Who decides the best AI?

By Ana-Maria Stanciuc The AI industry has become adept at measuring itself. Benchmarks improve, model scores rise, and every new release arrives with a list of metrics meant to signal progress. And yet, somewhere between the lab and real life, something keeps slipping. Which model actually feels better to use? Which answers would a human trust? Which system would you put in front of customers, employees, or citizens and feel comfortable standing behind it? That gap is where LMArena has quietly built its business, and why investors just put $150 million behind it at a $1.7 billion valuation, in a Series A round.…This story continues at The Next Web

Source:: The Next Web

Your headphones just got two cameras, meet Project Motoko

By Paulo Vargas Razer Project Motoko is a CES 2026 concept headset with two eye-level cameras and multi-mic audio, built for hands-free POV capture and computer vision. A Developer Kit is planned for Q2 2026.
The post Your headphones just got two cameras, meet Project Motoko appeared first on Digital Trends.

Source:: Digital Trends

Everything Exciting Asus Announced at CES 2026: AR Glasses, Dual Screen Laptop & More

Snapping 8K drone footage just got easier thanks to Potensic’s new upgrade at CES 2026

By Tom Bedford The Potensic Atom 2 PTD-1 controller, showcased at CES 2026, lets you use a drone without needing to use your smartphone.
The post Snapping 8K drone footage just got easier thanks to Potensic’s new upgrade at CES 2026 appeared first on Digital Trends.

Source:: Digital Trends

Xreal puts a blazing-fast 240Hz display on your face with the ROG R1 AR glasses

By Manisha Priyadarshini XREAL has partnered with Asus ROG to launch the ROG R1 AR glasses, with 240Hz micro-OLED displays and a 171-inch virtual viewing area, making wearable AR a serious option for high-performance gaming.
The post Xreal puts a blazing-fast 240Hz display on your face with the ROG R1 AR glasses appeared first on Digital Trends.

Source:: Digital Trends

Accenture to acquire UK AI startup Faculty

Accenture has announced that it has agreed to acquire UK AI startup Faculty for an undisclosed sum, a potentially significant move in a consultancy sector currently scrambling to add greater artificial intelligence expertise.

According to Accenture, Faculty’s UK-based workforce of 400 “AI native professionals” will be integrated with its consulting teams, allowing the company to offer its customer base “world‑class AI capabilities.” The company will also integrate Faculty’s AI decision intelligence platform, Frontier, into its services.

“With Faculty, we will further accelerate our strategy to bring trusted, advanced AI to the heart of our clients’ businesses,” commented Accenture chair and CEO, Julie Sweet.

One detail that marks the acquisition as unusual is that Faculty’s current CEO, Marc Warner, will reportedly join Accenture’s Global Management Committee as chief technology officer (CTO). If confirmed, this means the CEO of a company employing a few hundred people will take a key leadership position at a huge consulting outfit with nearly 800,000 employees worldwide.

Accenture still lists its CTO as Rajendra Prasad, who will presumably step back from this role to focus on his other day job as the company’s Group Chief Executive – Technology. CIO.com contacted Accenture and Faculty to confirm the new roles, but had no response by publication time.

AI reinvention

Traditional tech acquisitions are usually motivated by the value offered by a company’s patents, products and customers. With AI companies, just as important right now is human expertise.

Faculty offers all of these. Co-founded in 2014 as ASI Data Science by then Harvard quantum physics research fellow Warner, it was renamed Faculty in 2019. This might have been an attempt to disassociate it from allegations, which it strenuously denied, that it was part of the same internship program as scandal-hit company Cambridge Analytica, through the latter’s parent company, SCL Group.

Since then, Faculty has established a solid reputation through its work with the UK government, including the creation of an NHS Early Warning System (EWS) system used to predict hospital admissions and ventilator requirements during the Covid pandemic.

This dovetails well with Accenture’s direction; it has spent the last year undergoing an AI makeover. In June, the company folded five business units into a single division, Reinvention Services, as part of a plan to “re-invent Itself for the Age of AI.” At the same time, it started calling its employees “reinventors”.

The company has also formed alliances with OpenAI and Anthropic which will see tens of thousands of its employees trained to use and promote both companies’ chatbot and agentic technologies.

“We are writing the playbook for how to be the most AI-enabled, client-focused professional services company in the world,” said Accenture CEO Sweet in this week’s announcement of the acquisition.

This article originally appeared on CIO.com.

Source:: Computer World

Is it time for Apple to introduce 5G Macs?

Apple might introduce the first cellular-enabled MacBook model this year, according to a Bloomberg report (confirmed by Macworld) that claims the company began exploring the possibility in late 2024. Who would this benefit? How good an experience would it provide? And why is now (potentially) the right time for the company to make such a move?

Who would cellular Macs benefit?

The promise of cellular Macs is a good one. Equipped with cellular modems, these machines would liberate mobile professionals from searching for more secure communications and/or more bandwidth than you usually get over public Wi-Fi networks. Just like cellular on an iPhone or iPad, you should be able to work from anywhere and never be without a connection.

There are other advantages, particularly for enterprise professionals or companies using highly secure 5G-based private networks. Apple’s modems support 5G network slicing and already handle such deployments on iPhones, so there’s no reason Macs would not support this, too. That promise brings the highest-quality secure mobile bandwidth to Mac-using pros.

It’s not as if you are unable to get your Mac online using your mobile network, as you can tether your system to your iPhone to do so. This is fine for most users, most of the time, offering some of the convenience of an always-available mobile connection.

There are drawbacks, however: the connection can at times be slow or inconsistent, your carrier needs to support such use, and tethering eats up iPhone battery life. A Mac equipped with its own cellular connection should deliver more consistent connections and leave your iPhone battery untouched. It should also be more able to move big chunks of data around in comparison to the limits sometimes set by public networks.

What about the user experience?

Apple has always understood that between the idea and reality you’ll find shadow; this could be particularly true when it comes to shipping Macs equipped with built-in cellular modems. These would suffer from all the challenges we already get using cellular connections. 

If you’ve ever travelled, you will likely have encountered poor cellular performance on trains (mainly because of the metal coating on the windows), sketchy connections depending on where you live in a city or town, and literal dead zones inside some buildings. These obstacles won’t disappear just because you’re using a cellular Mac. This is the shadow, between user expectation and reality, that might have kept Apple from introducing cellular connections in Macs so far. 

Another problem is cost. Apple’s decision to create its own 5G modems wasn’t entirely based on its desire to own all the tech used in its products; it also reflected its dissatisfaction with the high prices modem supplier Qualcomm demanded for its radios in Apple devices. The potential benefits to mobile professionals just didn’t justify hiking the cost of Macs for everybody else.

Why now?

Apple now makes its own 5G modems, so it’s reasonable to think it could put these inside Macs at a more acceptable cost. Apple also owns the whole stack, which means the 5G modem should work happily alongside other networking features — Bluetooth, Wi-Fi — on Macs. 

Tie this up with eSIM technologies and you end up with a Mac you can get online almost anywhere. Presumably you’ll even be able to send messages via satellite directly from your Mac or make use of any additional satellite-based networking services Apple delivers in the future.

That the company also makes the processors inside Macs should help build efficiency, reducing unnecessary energy drain. In other words, the sublime combination of Apple’s processor and networking silicon designs along with the Apple-created operating system makes a cellular Mac more possible than ever. 

It also gives Apple an option to upsell to customers, just as it presently sells iPads equipped with cellular connections for an additional charge. Given the economic calamities the company continues to navigate, it might be that the potential business opportunity is the main reason the company is considering the move. (Never let it be said that Apple management easily resists an opportunity to boost revenue by giving customers more of what they need.)

Will Apple introduce a cellular Mac?

The future is hard to predict. But it’s worth noting that in 2024, over 45% of Fortune 500 companies issued cellular-connected laptops to employees, up from 28% in 2022. Given Apple’s growth in the enterprise sector, why would it fail to offer cellular Macs as a purchasable BTO option for business customers?

You can follow me on social media! Join me on Bluesky, LinkedIn, and Mastodon.

Source:: Computer World
