By Megan Carnegie A decade ago, startups often equated success with rapid headcount growth. The formula was simple: build a product, raise a round, hire fast. Bigger teams meant bigger bets. But the rulebook is being rewritten as a new generation of startups scales with leaner teams and fewer people. They’re not building out sprawling customer support or sales teams, and they’re automating what once warranted entire departments. Their growth is remarkable. Cursor, which became the fastest-growing SaaS company in history, generated $200mn in revenue with 30 employees. Midjourney made $200mn with 40. Ben Lang’s site Tiny Teams tracks these…This story continues at The Next Web
Source:: The Next Web
Will the use of generative AI (genAI) tools degrade human intelligence over the long term?
That question — and lingering concerns about cheating and hallucinations — are among the issues US university professors are grappling with as a new academic year approaches. Even as some embrace the technology, others look askance at generative AI tools.
But because of how quickly it’s been adopted in the business world, completely shutting out AI in classrooms could hamper students’ professional development, several professors told Computerworld.
“If integrated well, AI in the classroom can strengthen the fit between what students learn and what students will see in the workforce and world around them,” argued Victor Lee, associate professor at Stanford’s Graduate School of Education.
GenAI companies are certainly doing their part to lure students into using their tools by offering new learning and essay-writing features. Google has gone so far as to offer Gemini free for one year, and OpenAI late last month introduced “Study Mode” to help students “work through problems step by step instead of just getting an answer,” the company said in a blog post.
Selecting the “study mode” in ChatGPT details how the genAI tool reaches an answer to a query. Google has a similar experimental tool called “Learn About” and a tool called NotebookLM, which recently got a host of new genAI features.
Grammarly has also introduced new genAI tools to help students with assignments. The AI Grader agent “looks at your assignment rubric and gives you suggestions like your professor would, and a grade prediction before you submit your work,” Grammarly CEO Shishir Mehrotra explained in a LinkedIn entry. The agents work in existing user interfaces; users don’t need to cut and paste or type in prompts.
The agents can flag unsupported claims in students’ writing, explain why evidence is needed, and recommend credible sources, Luke Behnke, vice president of product management at Grammarly, said in an interview. “Colleges recognize it’s their responsibility to prepare students for the workforce, and that now includes AI literacy,” Behnke said.
Universities are also implementing AI in their own learning management systems and providing students and staff access to Google’s Gemini, Microsoft’s Copilot and OpenAI’s ChatGPT.
For example, Duke University in North Carolina provides all staff and students free access to OpenAI’s GPT-5, including the mathematics and coding tools. “University offices that work on enhancing teaching and learning quality are working really hard to guide faculty on ways to use AI,” Lee said.
Longji Cuo, an associate professor at the University of Colorado Boulder, teaches a course on AI and machine learning to help mechanical engineering students learn to use the technology to solve real-world engineering problems.
Cuo encourages students to use AI as an agent to help with teamwork, projects, coding, and presentations in class. “My expectation on the quality of the work is much higher,” Cuo said, adding that students need to “demonstrate creativity on the level of a senior-level doctoral student or equivalent.”
Cuo asks students not to simply accept whatever results advanced genAI models spit out, as they may be riddled with factual errors and hallucinations. “Students need to select and read more by themselves to create something that people don’t recognize as an AI product,” Cuo said.
Some professors are trying to mitigate AI use by altering coursework and assignments, while others prefer not to use it at all, said Paul Shovlin, an assistant professor of AI and digital rhetoric at Ohio University.
But students have different requirements and use AI tools for personalized learning, collaboration, and writing, as well as for coursework workflow, Shovlin said. He stressed, however, that ethical considerations, rhetorical awareness, and transparency remain important in demonstrating appropriate use.
“GenAI isn’t a light switch that is flipped on or off,” he said. “GenAI has consolidated a lot of capabilities in one site.”
Shovlin’s new media composition course uses genAI as a tool to build assets and complement skillsets needed for student multimedia projects. “This means students can focus on the larger assignment and not sweat some basic building blocks that aren’t the basis for the class,” he said.
For example, in the graphic novel assignment, Shovlin demonstrates how image creation tools can create assets that can be integrated into graphic design in a collage-like manner.
“Drawing is not a learning outcome for the class, but successfully engaging in a substantial multimedia composition is …,” he said.
GenAI can be useful in the classroom as long as a student is learning, asking critical questions, and developing skills, said Jack Gold, principal analyst at J. Gold Associates. “It is very helpful if you know the right questions to ask and if you find a competent AI model that knows about your subject matter,” Gold said.
But lazy students who rely on genAI tools to write papers are only undermining their skills development, he said.
Gold predicted that one day, AI agents will be able to work with students on their personalized education needs. “Rather than having one teacher for 30 students, you’ll have one AI agent personalized to each student that will guide them along.”
Source:: Computer World
US companies have invested between $35 billion and $40 billion in generative AI (genAI) projects, but most efforts are stuck in the pilot stage, according to a report from MIT’s NANDA initiative. Only about 5% of the efforts lead to rapid revenue growth; the majority produce little or no impact, Fortune reports.
The core problem is apparently not the quality of the models being used, but a lack of integration, learning and alignment with corporate workflows. Companies often invest in sales and marketing solutions, but the biggest returns seem to be in back-office automation and streamlining internal processes.
The report also found that successful companies tend to buy specialized solutions and build partnerships, while in-house development projects fail significantly more often.
Source:: Computer World
By Deepti Pathak BGMI, or Battlegrounds Mobile India, is probably the most fun battle royale game, and for good…
The post BGMI Redeem Codes For August 19 appeared first on Fossbytes.
Source:: Fossbytes
By Deepti Pathak Most Apple Watches last only a day or two at most. However, if yours is dying…
The post 7 Ways To Fix Apple Watch Battery Draining Fast (2025) appeared first on Fossbytes.
Source:: Fossbytes
By Andrea Hak Since ChatGPT’s debut in 2022, generative AI has quickly entered our work, study, and personal lives, helping to speed up research, content creation, and more at an unprecedented rate. Enthusiasm for generative AI tools has understandably gained traction, with an adoption rate even faster than that of the internet or the PC, but experts warn we should proceed with caution. As with every new technology, generative AI can launch society forward in a number of ways, but it can also bring consequences if left unchecked. One of those voices is Natasha Govender-Ropert, Head of AI for Financial Crimes at Rabobank. She joined TNW founder…This story continues at The Next Web
Source:: The Next Web
Anthropic has introduced a new feature in its Claude Opus 4 and 4.1 models that allows the generative AI (genAI) tool to end a conversation on its own if a user repeatedly tries to push harmful or illegal content.
The new behavior is supposed to only be used when all attempts to redirect a conversation have failed or when a user asks for the conversation to be terminated. It is not designed to be activated in situations where people risk harming themselves or others. Users can still start new conversations or continue a previous one by editing their replies.
The purpose of the feature is not to protect users; it’s to protect the model itself. While Anthropic emphasizes it does not consider Claude to be sentient, tests found the model showed strong resistance and “apparent discomfort” in response to certain types of requests. So, the company is now testing measures for better “AI wellness” — in case that becomes relevant in the future.
Source:: Computer World
In its efforts to deploy AI tools, international services firm Wolters Kluwer has created frameworks to ensure responsible AI development with continuous human oversight.
The Dutch company has woven AI into its core products for more than a decade, products that now drive about 50% of digital revenue. Wolters Kluwer’s strategy is to create an “AI toolbox” from which it can choose the models that best fit a given business task. The company also learned a key truth about the fast-moving technology: without clean data, AI produces errors and hallucinations.
Deep integration of AI — instead of relying on add-ons — has been a core approach to rolling out the technology at the nearly 200-year-old firm. For example, in its Tax & Accounting division, Wolters Kluwer pursues a strategy called “Firm Intelligence,” which leverages AI, its own content, and embedded platform integration to anticipate internal workforce and customer needs.
The Netherlands-based company has also established what it calls Responsible AI Principles that emphasize transparency, explainability, privacy, fairness, governance, and human-centric design.
In this Q&A, Wolters Kluwer CIO Mark Sherwood explained how his company has seen efficiencies in AI-assisted code generation and a closing of skills gaps.
Mark Sherwood, Wolters Kluwer
AI-assisted code generation tools are increasingly prevalent in software engineering. How has AI-assisted development changed your software development lifecycle? “We are beginning to see improvements in our software development lifecycle leveraging AI-assisted development. We are reducing the time it takes to generate code while vastly reducing the number of errors and the time it takes to test the new code. Our current targets are a 25% reduction in both metrics, and we are seeing signs that those goals are very achievable.”
Which AI tools have provided the most value to your engineering teams so far? “We use a mixture of LLMs [large language models], automated test assistants and domain-specific AI models, and we’ve found some of the native third-party tools are very good at some specific tasks. We have not (yet) found one tool that can do it all, but that may never be the case. We have chosen to go down the route of bringing an ‘AI toolbox’ with a number of different tools and picking the one(s) that we believe are best suited for the task at hand.”
Will AI-assisted code generation tools eliminate the need for as many software developers? Have you seen that in your own organization? “We firmly believe that AI-assisted code-generation tools will change the structure of software development teams over time, with fewer people needed for repetitive coding tasks. We believe this is particularly the case for more entry-level coding work, but we also see this as an opportunity to shift more junior talent into more advanced and creative projects early on in their careers.
“While we have not eliminated any existing roles to date due to AI-assisted code generation tools, we have reduced the number of open requisitions we used to have for software developers. We do not view AI as a way to eliminate current job roles, but more to allow software developers to work on other high-value tasks.”
How are you managing code quality, testing, and security with AI-generated code? “We are using AI to help with testing both AI-generated and human-generated code. It’s still early days so we have engineers involved, but we see a day in the very near future where we’re able to have AI test all code without needing human intervention. We do have security checks in place — they are a key part of our DevSecOps strategy, which lends itself well to leveraging the advantages of what AI brings.”
Can you be more specific about the guardrails that are in place? “We have an AI Center of Excellence at Wolters Kluwer that, along with our Global Information Security team, are establishing an AI governance framework. This framework includes key aspects of boundaries that keep our AI systems behaving in safe, secure and predictable ways. We are putting controls into place, much like with the rest of our internal security program, that help to address any potential risks around policy compliance.
“We are still learning here, but it’s something that is a top priority for all the teams…. We err on the side of making the controls too tight in the beginning and then may need to adjust over time. We ensure that these controls are working properly by running some edge cases through the process to make sure it’s catching things we need it to catch.”
AI agents need a system or framework that manages, coordinates, and directs multiple AI agents to work together toward a common goal or across complex tasks — an orchestrator. Have you used an orchestrator, and if so, what have been the challenges with interoperability? “Yes, we have dealt with, and are dealing with, both AI agents and orchestrators. We are currently building out AI agents across a number of areas within IT, including service desk, incident management and disaster recovery, to name a few. And yes, in order to create AI-enabled workflows rather than just single AI agents, we are also using AI orchestrators.
“Our biggest challenges so far have been about making sure we are properly integrating with the other numerous models, tools and data sources as we automate these larger AI systems. On the plus side, using these AI orchestration tools has helped us to improve efficiency and drive better compliance and governance throughout the process.”
Is AI helping you close skills gaps or reduce dependency on specific roles? “AI is helping us both close skills gaps and reduce dependency on certain roles. The amount of interest and knowledge in AI and AI tools is increasing at a rapid pace and we are building up our own internal knowledge very quickly. In the initial phases, it’s more about improving the skills of software engineers and some more technical business roles. Going forward, it will allow us to reduce dependencies in a number of areas of engineering, both external and internal facing.”
How are large corporations — especially in regulated sectors like healthcare, finance and legal — deploying AI at scale while managing risk, data security and compliance challenges? “Managing risk is one of our highest priorities and a robust data security program is a critical piece of that strategy. We have safeguards in place to make sure we are only using our own internal data, which represents nearly 200 years of proprietary information, and we go to great lengths to ensure that data is managed and protected.”
What governance policies do you have around using generative AI? “We have created an AI Center of Excellence that has members from all organizations across the company, including our product development organization and our internal information technology organizations, which are driving this.
“The focus is on the product development organization, but both Product Development and Information Technology teams are key participants. Part of the charter of the team is to create and help enforce the governance policies around AI usage, including tools, and making sure that we prioritize the work being done across the teams.”
What’s coming next, including AI agents, quantum security risks and why data quality is essential to successful digital transformation? “AI is progressing at a rapid pace. We’re already developing AI agents and are working through the implications of having AI ’employees.’ It’s exciting to see a mindset shift moving from thinking of AI as just a tool to viewing it as an operator. These systems will take on tasks, make decisions and function independently. This will have real implications for how we design products, structure workflows and approach accountability.
“Of course, none of this works without good data. AI models are only as effective as the information they’re trained on and without a strong data strategy, effective governance, and enterprise-wide participation, companies won’t be able to fully leverage AI agents. That’s why we place such a strong emphasis on ensuring our nearly 200 years of data at Wolters Kluwer remains accurate and reliable.”
Source:: Computer World
By Hisan Kidwai vivo’s V series has always been about design and cameras. After all, the two are the…
The post vivo V60 Review: The Best Camera Phone Under 40K? appeared first on Fossbytes.
Source:: Fossbytes
By Hisan Kidwai Gaming phones have always been up in the super premium price segment, appealing only to the…
The post OPPO K13 Turbo Review: Budget Gaming Beast appeared first on Fossbytes.
Source:: Fossbytes
By Michael Grupp Remember the movie Dodgeball? That ridiculous scene where the coach makes his team run across a busy highway? The logic: “If you can dodge traffic, you can dodge a ball.” Europe’s approach to AI feels similar: if you can survive our labyrinth of rules, you can survive anywhere. Conversations with European companies about AI rarely begin with “What can it do?” Instead, they open with a sigh and ask, “Are we allowed to use this?” For most industries, that’s a creativity-killer, but legal professionals thrive in regulatory swamps. Europe’s swamp is about to become its competitive moat. The paradox: red…This story continues at The Next Web
Source:: The Next Web
By Hisan Kidwai Free Fire Max is one of the most popular games on the planet, and for good…
The post Garena Free Fire Max Redeem Codes for August 17 appeared first on Fossbytes.
Source:: Fossbytes
By Hisan Kidwai Free Fire Max is one of the most popular games on the planet, and for good…
The post Garena Free Fire Max Redeem Codes for August 16 appeared first on Fossbytes.
Source:: Fossbytes
A new study from AI security provider CalypsoAI reveals a “growing use and misuse of AI” within US organizations by employees at all levels, including C-suite executives.
Of note, it said, “half (50%) of executives say they’d prefer AI managers over a human, although 34% aren’t entirely sure they can tell the difference between an AI agent and a real employee. Over a third of business leaders (38%) admit they don’t know what an AI agent is — the highest of any role. Almost the same proportion (35%) of C-suite executives said they have submitted proprietary company information so AI could complete a task for them.”
Those findings and others are contained in the firm’s study, The Insider AI Threat Report, which describes a “hidden reality inside today’s enterprises: employees at every level are misusing AI tools, often without guilt, hesitation, or oversight.”
For many, it is fine to break rules
The survey of more than 1,000 US workers revealed that, overall, 45% of employees say they trust AI more than their co-workers, 52% of employees would use AI to make their job easier, even if it violates company policy, and 67% of executives say they’d use AI even if it breaks the rules.
The misuse, said the Dublin-based company in the release, even extends to “highly regulated industries” and includes:
60% of respondents from the finance industry admitting to violating AI rules, with an additional one-third saying they have used AI to access restricted data.
42% of employees in the security industry knowingly using AI against policy, and 58% saying they trust AI more than they do their co-workers.
A mere 55% of workers in the healthcare industry following their organization’s AI policy, and 27% saying they “would rather report to [AI] than a human supervisor.”
Asked what prompted the study, CalypsoAI CEO Donnchadh Casey said via email on Friday, “we wanted hard data on what is happening inside enterprises with AI adoption. External threats often get the attention, but the immediate and faster-growing risk is inside the building, with employees at all levels using AI without oversight. Our customers are already telling us they are seeing this risk grow. The research confirms it.”
‘Shadow AI now the new shadow IT’
He said his initial reaction to the findings as they began to pour in, especially when it comes to C-suite leaders’ habits with AI, was that it was surprising to see how quickly the C-suite is bypassing its own rules.
Senior leaders, said Casey, “should set the standard, yet many are leading the risky behavior. In some cases, they are adopting AI tools and agents for business tasks faster than the teams responsible for securing them can respond. Our customers see the same pattern across industries, which is why this is as much a leadership challenge as it is a governance challenge.”
Justin St-Maurice, technical counselor at Info-Tech Research Group, said, “Shadow AI has become the new shadow IT. Employees are using unsanctioned tools to get real work done because AI can deliver two things they actually feel: Cognitive offload takes the drudge work off of their plates, and cognitive augmentation is helping them to think, write, and analyze faster.”
CalypsoAI’s numbers, he said, “show how strong that pull is. Their data shows that more than half of workers say they would use AI even if their organization’s policy says no, a third have already used it on sensitive documents, and almost half of surveyed security teams admitted to having pasted proprietary material into public tools. I’m not sure it’s as much about disloyalty as it is about how governance and enablement lag behind how people work today.”
The risk here is clear, added St-Maurice, because every unmonitored prompt can lead to intellectual property, corporate strategies, sensitive contracts, or customer data leaking out to the public. “And naturally,” he noted, “if IT blocks these AI services, it’ll drive users further underground to look for new ways to access them. The practical fix is through structured enablement.”
A proper strategy, he said, is to provide a sanctioned AI gateway, connect it to identity, log prompts and outputs, apply redaction for sensitive fields, and publish a few clear and plain rules that people can remember. This should be paired with short, role-based training and a catalog of approved models and use cases. This gives employees a safe path to the same gains.
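The gateway pattern St-Maurice describes can be illustrated with a minimal sketch. This is not an actual CalypsoAI or Info-Tech product; it is a hypothetical Python example showing the core moves — redact sensitive fields from a prompt, log the request against a user identity, and only then forward it to a sanctioned model. The redaction patterns, function names, and in-memory log are all illustrative assumptions; a real deployment would use a dedicated DLP library, organization-specific rules, and an append-only audit store.

```python
import re

# Illustrative patterns for common sensitive fields; real gateways
# would use organization-specific detection rules.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive substrings with labeled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

audit_log = []  # stand-in for an append-only store tied to identity

def gateway_submit(user_id: str, prompt: str) -> str:
    """Redact and log a prompt before it reaches the sanctioned model."""
    clean = redact(prompt)
    audit_log.append({"user": user_id, "prompt": clean})
    # A real gateway would forward `clean` to the approved model here
    # and log the response as well; we just return the cleaned prompt.
    return clean
```

The point of the sketch is the ordering: redaction and logging happen before the model call, so sensitive material never leaves the organization and every request is attributable, which is what makes the sanctioned path safer than blocking AI outright.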
Casey agreed, noting that any solution geared toward correcting the problem of unauthorized AI use must address both people and technology.
“Many enterprises’ initial reaction is to block AI entirely, but this is counterproductive, as employees often circumvent rules to capture AI productivity gains,” he said. “A better approach is to give access to AI across the organization, but monitor and control this access to step in when behavior deviates from policy.”
This, he said, means organizations should have clear, enforceable policies paired with real-time controls that secure AI activity wherever it happens, which includes oversight of AI agents used for business tasks that can operate at scale and touch sensitive data.
“By securing AI where it is deployed and doing real work, enterprises can allow its use without losing visibility or control,” he said.
The survey was conducted in June by research firm Censuswide, which surveyed 1,002 full-time office workers in the US, aged 25-65.
Source:: Computer World
By Thomas Macaulay The subscription model beloved of software is now creeping into cars. Volkswagen has become the latest automaker to adopt the pricing structure. The German marque has introduced a monthly subscription fee to access the full performance of some of its ID.3 electric vehicles. Auto Express spotted that the Volkswagen ID.3 Pro and Pro S were listed in the UK as producing 201bhp, but could hit 228bhp — if customers paid extra. For that extra 27bhp, buyers can pay £16.50 per month, £165 annually, or £649 for a lifetime subscription that transfers with the car if it’s resold. Volkswagen described the…This story continues at The Next Web
Source:: The Next Web
By Hisan Kidwai Free Fire Max is one of the most popular games on the planet, and for good…
The post Garena Free Fire Max Redeem Codes for August 15 appeared first on Fossbytes.
Source:: Fossbytes
The AI boom has created a record-breaking wave of new billionaires and sky-high company valuations over the past year, at a pace unprecedented in history, according to CNBC.
CB Insights reports there are now 498 AI “unicorns” (private companies worth at least $1 billion) with a combined value of $2.7 trillion — and 100 of them were founded after 2023. In total, more than 1,300 AI startups have been valued at $100 million or more.
Large rounds of capital for companies like Anthropic, OpenAI, Anysphere and Safe Superintelligence have created new paper fortunes, with several founders now multi-billionaires, at least on paper.
Unlike the dotcom era, many AI companies are staying private longer, thanks to constant capital injections from venture capitalists, sovereign wealth funds and private investors. Liquidity is instead created through secondary markets, takeovers, and mergers.
Not surprisingly, the phenomenon is heavily concentrated in the San Francisco area.
Source:: Computer World
Just in time to build up the late-summer Apple product refresh hype, news of a big collection of product upgrades has magically appeared as Cupertino puts the finishing touches to its iPhone keynote event invite.
To help Apple fans monitor the impact of the latest leaks on their blood oxygen levels, the company also helpfully reintroduced some support for oxygen tracking at about the same time the latest leaks appeared. The speculation follows tantalizing promises from Apple CEO Tim Cook, who in early August told an all-hands Apple employee meeting that he’d “never felt so much excitement” about the products the company has planned.
“The product pipeline, which I can’t talk about — it’s amazing, guys. It’s amazing,” Cook said. “Some of it you’ll see soon, some of it will come later, but there’s a lot to see.”
So, what’s with the latest speculation?
Chips with everything
If the code is correct, Apple is about to breathe new life into numerous products with chip upgrades to make them more powerful. It also plans to introduce brand new product families, though the schedule for many of the following products is likely to extend into 2026. There’s exciting news across Apple’s product range:
iPads: Apple’s plotting a new iPad mini with the same high-end A19 Pro chip you’ll find inside iPhone 17 Pro models. Also, as it seeks to broaden its market to guard against economic unpredictability, Apple plans a spring launch for a new low-cost iPad equipped with an A18 chip, a big upgrade from the A16 inside the current entry-level model.
Vision Pro: As expected, Apple will put an M5 chip inside the Vision Pro.
Home: Apple’s new HomePod mini will hold an updated S-series chip, while Apple TV gains an A17 Pro processor — a substantial boost beyond the current A15 Bionic chip. It will likely need this power for those sofa-based Apple Intelligence interactions. It might also need this for the smart home it plans, including a Ring-competing security camera with movement and person detection, and its first attempts at home robotics.
Mac: Not only do we now expect the first M5 Mac models and a $599 MacBook, but the latest news claims Apple intends to introduce a next-generation Studio Display 2. Code-named J427, this isn’t expected to debut until 2026. But the introduction of a new A19 Pro internal processor should make it capable of accurately handling modern and future video and audio codecs.
iPhone: In a chime of loud synchronicity, all these new product speculations come just in time to raise the temperature as Apple prepares to introduce the first of its new iPhone 17 range. They’re expected to bring more memory than ever, double the entry-level storage for a small price increase, and 8x optical zoom (on Pro models). Expect iPhone 17, iPhone 17 Air, iPhone 17 Pro, and iPhone 17 Pro Max models.
Apple Watch: Expect a third-generation Apple Watch Ultra, with a faster chip, larger display and 5G (including satellite messaging support); Apple Watch Series 11 gets a processor upgrade and the first Apple Watch SE upgrade since 2022.
AirPods and AirTags: Apple is also preparing upgrades to AirPods Pro and AirTags. The latter are expected to be more accurate and operate at longer range, while AirPods Pro 3 promise improved noise cancellation and advanced sleep and hearing health features.
Is that all there is?
This extensive collection of rumored product upgrades seems to provide a good glimpse at the next 12 months of planned introductions, but there are some things missing from the list. Apple will no doubt introduce new Macs and iPad Pro models during this time, even as it will at last introduce the contextually aware Apple Intelligence it promised us at WWDC last year.
The latter introduction will probably create opportunity for additional Apple products and services, including the company’s take on a digital health coach, integration between those health services and the company’s existing products, and the opportunity to develop additional solutions, such as health assistant robotics.
Even further out, we know Apple hopes to introduce glasses equipped with visionOS as well as its first-ever folding iPhone.
Apple clearly has a lot to talk about.
Source:: Computer World
By Lena Hackelöer There’s no doubt that Europe has ambition. Over the last decade, we’ve laid the foundation for a thriving digital economy, from regulatory leadership to tech-driven reforms and rapidly growing regional hubs. But infrastructure alone doesn’t build the future; people do. And today, we face the very human challenge of how to win — and retain — the talent that powers innovation. We’re seeing highly skilled individuals, such as founders, engineers, and product leaders, move their operations or careers to the US and, in some cases, to Asia. This trend reflects global competition at its fiercest. But it’s also a moment…This story continues at The Next Web
Source:: The Next Web
By Hisan Kidwai Free Fire Max is one of the most popular games on the planet, and for good…
The post Garena Free Fire Max Redeem Codes for August 14 appeared first on Fossbytes.
Source:: Fossbytes