By Thomas Macaulay The race to dominate AI infrastructure has left Europe trailing the US — but the continent still has a shot at global leadership in AI apps. That was the verdict of Dutch tech leaders at the Assembly, the invite-only policy track of TNW Conference in Amsterdam. While Silicon Valley controls the scaffolding for AI, they urged Europe to focus on building apps on top. Leading the call was Jeroen van Glabbeek, CEO and founder of CM.com, a customer engagement platform with a market cap of around €217mn and annual revenues of €274mn in 2024. Van Glabbeek believes the US advantage in…This story continues at The Next Web
Source:: The Next Web
By Partner Content PDF users can face different struggles when it comes to editing, large file sizes, security risks,…
The post Best PDF Editors for Everyone: UPDF 2.0 for Windows, Mac, iOS and Android appeared first on Fossbytes.
Source:: Fossbytes
By Partner Content Can you download YouTube videos legally? When your liked or favorite YouTube videos get deleted or…
The post 9 Free Ways to Download YouTube Videos in Laptop appeared first on Fossbytes.
Source:: Fossbytes
In a move that could redefine the boundaries between generative AI (genAI) and intellectual property, Disney and Universal have joined forces to file a lawsuit against Midjourney, one of the world’s most popular AI image generators.
You may think you’ve heard this story before — The New York Times‘ 2023 lawsuit against OpenAI and Microsoft and News Corp. vs. Perplexity — but this case is different. For one thing, this is the first time major Hollywood studios with far more cash to prosecute the case have directly targeted a genAI company for copyright infringement. For another, Disney and Universal are both big AI users.
Disney and Universal allege that Midjourney’s platform is a “bottomless pit of plagiarism.” With Midjourney, all a subscriber need do to create unauthorized images of iconic characters such as Darth Vader, Elsa, the Minions, Shrek, and many others is to type in a prompt.
Original ‘Iron Man’ image is on the left; genAI-created image is on the right.
Disney/Universal lawsuit
Original image is on the left; genAI image is on the right.
Disney/Universal lawsuit
There’s no question anyone can do it. If you don’t feel like trying it yourself, just look at some of the images in the Disney/Universal lawsuit complaint (shown above).
Can you tell which ones are the original from Avengers: Infinity War and which were generated by Midjourney? I can’t, and I have a good eye for this kind of thing. GenAI image creation has come a long way since all you had to do was count the number of fingers. (The originals are on the left.)
This didn’t require some kind of fancy prompt. As researchers have found, all you had to do to generate them was name the character and use the keyword “screencap,” and you quickly received your fake image. Or you could simply ask for “master super villain” or “armored superhero.”
“This is not a ‘close call’ under well-settled copyright law,” the lawsuit claims.
Correct. It’s not close at all.
In the company’s defense — if you can call it that — Midjourney CEO David Holz is on record as saying his AI has been trained on “just a big scrape of the Internet.” What about copyrights on these images?
“There isn’t really a way to get a hundred million images and know where they’re coming from. It would be cool if images had metadata embedded in them about the copyright owner or something. But that’s not a thing; there’s not a registry. There’s no way to find a picture on the Internet, and then automatically trace it to an owner and then have any way of doing anything to authenticate it.”
I think when it comes to Disney, it’s pretty darn obvious who owns the images. I mean, this is Disney, the big bad wolf of copyright. After Walt Disney lost the copyright to his earlier character, Oswald the Lucky Rabbit, he made darn sure that, starting with Mickey Mouse in 1928, he’d lock down his company’s intellectual property for as close to forever as he could.
Indeed, over the decades, Disney has been behind laws to increase copyright coverage from a maximum of 56 years in 1928 to 75 years with the Copyright Act of 1976, and then 95 years with the Sonny Bono Copyright Term Extension Act (CTEA) of 1998, better known as the “Mickey Mouse Protection Act.”
Disney has also never been shy about suing anyone who’d dare come close to its copyrighted images. For example, in 1989, Disney threatened legal action against three daycare centers in Hallandale, FL, for painting murals of Disney characters such as Mickey Mouse, Donald Duck, and Goofy on their walls.
Why? Because it’s all about the Benjamins.
Disney, and to a lesser extent Universal, live and die by monetizing their intellectual property (IP). Mind you, much of that IP is generated from the public domain. As the Center for the Study of the Public Domain noted: “The public domain is Disney’s bread and butter. Frozen was inspired by Hans Christian Andersen’s The Snow Queen. … Alice in Wonderland, Snow White, The Hunchback of Notre Dame, Sleeping Beauty, Cinderella, The Little Mermaid, and Pinocchio came from stories by Lewis Carroll, The Brothers Grimm, Victor Hugo, Charles Perrault, Hans Christian Anderson, and Carlo Collodi.”
What Disney did with the public domain, Midjourney and the rest of the AI companies want to do with pretty much everything on the Internet. OpenAI CEO Sam Altman, for instance, has consistently argued that training genAI on copyrighted data should be considered “fair use.” He’s not alone.
On the other side of the fence, Disney and Universal’s lawsuit is not just about damages, which the pair puts at $150,000 per infringed work, but about setting a precedent. They want to stop Midjourney’s image and soon-to-be-launched video generation services in their tracks.
At the same time, the film studios freely admit they’re already using genAI themselves. Disney CEO Bob Iger has said the technology is already making Disney’s operations more efficient and enhancing creativity. “AI might indeed be the most potent technology our company has ever encountered, particularly in its capacity to enhance and allow consumers to access, experience, and appreciate our entertainment.” He also, of course, stressed that, “Given the speed that it is developing, we’re taking precautions to make sure of three things: One, that our IP is being protected. That’s incredibly important.”
This lawsuit is more than a Hollywood squabble; it’s a watershed moment in the ongoing debate over genAI, copyright, and the future of creative work. Previous cases have challenged the boundaries of fair use and data scraping, but none have involved the entertainment industry’s biggest players.
It might seem like a slam dunk for the Hollywood powerhouses. The images speak for themselves. But, if there’s one thing I’ve learned in covering IP cases, it’s that you never know what a court will decide.
Besides, there’s a real wild card. Donald Trump’s AI Action Plan is still a work in progress. The AI companies are arguing that it should give them permission to use pretty much anything as grist for their large language models (LLMs), while the media companies want all the copyright protection they can get.
Which way will Trump’s officials jump? We don’t know. But I have a bad feeling about where they’ll go.
You see, what we do know is that the Copyright Office released a pre-publication version of its 108-page copyright and AI report, which strove to strike a middle ground “by supporting both of these world-class industries that contribute so much to our economic and cultural advancement.” It added, however, that while some generative AI training probably constitutes a “transformative” use, the mass scraping of all data did not qualify as fair use.
The result? The Trump administration, while not commenting on the report, fired Shira Perlmutter, the head of the Copyright Office, the next day. She’s been replaced by an attorney with no IP experience.
Oh, also, hidden away in Trump’s “One Big Beautiful Bill” is a provision that imposes a 10-year ban on the enforcement of any state or local laws or regulations that “limit, restrict, or otherwise regulate” AI models, AI systems, or automated decision systems. If that becomes law, whatever is in Trump’s AI Action Plan is what we’ll have to live with for the next few years.
As an author, I can’t tell you how unhappy that prospect makes me. I expect Trump to side with the AI companies, which means I can look forward to competing with my own repurposed work from here on out.
Further reading:
AI vs. copyright
Court tosses hallucinated citation from Anthropic’s defense in copyright infringement case
Eleuther AI releases 8TB collection of licensed and open training data
Source:: Computer World
By Thomas Macaulay Europe must take bigger bets on young founders to build tomorrow’s tech giants, industry leaders urged today. Speaking at TNW Conference, investors and CEOs called for stronger support for entrepreneurial ambition — before Europe’s best ideas and brightest minds head elsewhere. Kieran Hill, General Partner at 20VC — a venture capital firm founded by podcast host Harry Stebbings — urged the continent’s institutions to expand their appetite for risk. “We need to change how we sell ambition,” he said. Hill believes changing this mindset is key to producing Europe’s next business leaders. He warned that today’s talented founders often want…This story continues at The Next Web
Source:: The Next Web
By Siôn Geschwindt A saliva-based fertility tracker has received regulatory approval for use as a contraceptive in Europe. Developed by Berlin-based startup Inne, the at-home testing device — called the “Minilab” — tracks daily changes in progesterone levels, a hormone that plays a key role in regulating the menstrual cycle. It’s billed as a non-invasive alternative to hormonal birth control methods like the pill — and early tests have shown promising results. A year-long clinical study involving 300 women over 1,500 cycles found Inne’s at-home fertility tracker to be 100% effective when used perfectly, and 92% effective for typical use. That’s similar to the…This story continues at The Next Web
Source:: The Next Web
OpenAI has ended its long-standing partnership with Scale AI, the company that powered some of the most complex data-labeling tasks behind frontier models such as GPT-4.
The split, confirmed by an OpenAI spokesperson to Bloomberg, comes on the heels of Meta’s $14.3 billion investment for a 49% stake in Scale, a move that industry analysts warn could redraw battle lines in the AI arms race.
It also secured Scale founder Alexandr Wang to lead Meta’s AI division, accelerating what Deepika Giri, AVP for BDA & AI Research at IDC Asia/Pacific, described as a profound challenge to data neutrality in foundational AI layers. “The world is shifting toward vendor-neutral ecosystems,” Giri cautioned, where data security and open platforms are paramount. But with hyperscalers now commanding the core pipelines, that neutrality faces unprecedented pressure.
The high stakes of AI data and talent wars
Meta’s $29 billion valuation of Scale highlights its two-front war for both data infrastructure and elite talent. While the investment aims to shore up Llama 4’s competitiveness, the social giant is also offering unprecedented “seven-to-nine-figure” packages to lure top employees, including OpenAI staff reportedly targeted with $100 million offers, as CEO Sam Altman disclosed on the Uncapped podcast. Yet not all are swayed. A Menlo Ventures VC posted on X that many still choose OpenAI or Anthropic.
The fallout from OpenAI’s exit and Meta’s investment is poised to disrupt the data-labeling industry, projected to reach $29.2 billion by 2032. Scale’s interim CEO, Jason Droege, maintained in a blog post that the company’s data governance remains independent, stating, “nothing has changed about our commitment to protecting customer data.”
Those reassurances may already be falling short. OpenAI, Bloomberg reported, had already been quietly scaling back its use of Scale’s services for months, citing a need for more specialized data.
OpenAI’s exit redraws the AI data landscape
Scale, which began as a data-labeling pioneer built on a global contractor base in countries like India and Venezuela, reported $870 million in revenue for 2024. But with major clients like Google, which spent $150 million with Scale last year, reportedly rethinking their ties, its future is uncertain.
The CEO of Handshake, a Scale competitor, told Time that demand for his company’s services “tripled overnight” in the wake of the Meta deal. The exodus reflects a fear among Meta’s rivals that proprietary data and research roadmaps could leak to a competitor through Scale’s services.
This realignment also exposed blind spots in enterprise AI contracts. Most lack robust “change-of-control” clauses or vendor conflict safeguards, leaving companies exposed when partners align with rivals. As Ipsita Chakrabarty, an analyst at QKS Group, noted, many contracts still rely on static accuracy metrics that crumble against real-world data drift. The result, she warned, is that companies may end up “outsourcing intelligence but retaining liability for failures.”
Yet Scale’s value remains in its elite trainer network (historians, scientists, PhDs) handling specialized tasks that reportedly cost “tens to hundreds of dollars” per unit. While Meta’s non-voting stake avoided automatic antitrust review, regulators may still investigate the blurred line between influence and control. For now, the full implications will take months to unfold, as regulatory reviews, vendor transitions, and internal audits continue to reshape the AI data supply chain.
The new realities of AI development
As companies such as Google rush to build in-house data labeling capabilities, the industry faces a choice: repeat the mistakes of the 2010-2015 cloud consolidation era, or take a more open route.
“We’re seeing history repeat itself,” observed Anushree Verma, senior director analyst at Gartner. “The AI race is causing vendor fragmentation now, but consolidation is inevitable.” The parallels are striking. Like cloud providers before them, AI giants are pushing vertical integration that risks locking enterprises into monolithic systems. She urged CIOs to prioritize “agile, interoperable solutions” as safeguards against monolithic systems.
This resonates with IDC’s suggestion for “vendor-neutral ecosystems where data security, regulatory compliance, and open platforms take center stage,” a philosophy now clashing with the industry’s walled-garden reality.
For CIOs, this moment demands more than procurement checklists. Successful AI adoption requires baking in “change management, decision traceability, and human-AI interaction design” from day one, said QKS’ Chakrabarty.
The challenge now goes beyond compliance. It requires stress-testing AI ecosystems with the same urgency as applied to cloud and chip vulnerabilities. “The best approach,” according to IDC’s Giri, “is to evaluate capabilities independently and avoid deep integration across the stack, because a monolithic system may lack the flexibility to keep up with tomorrow’s needs.”
Source:: Computer World
Microsoft is set to cut thousands of jobs, mainly in sales, amid growing fears that AI advances are accelerating the replacement of human roles across the industry, Bloomberg reports.
The cuts follow a previous round in May, which saw approximately 6,000 roles eliminated.
Microsoft has been ramping up its AI investments to strengthen its position as enterprises across industries rush to integrate the technology into their operations.
Earlier this year, the company announced plans to spend around $80 billion in fiscal 2025, largely on building data centers to support AI training and cloud-based applications.
Adding to industry unease, Amazon CEO Andy Jassy said this week that generative AI and AI agents are expected to shrink the company’s corporate workforce over time.
AI or other factors?
AI is being used as an excuse for layoffs this year, but there may be more to it than meets the eye.
“One, we are still rebalancing employee counts from the over-hiring of the past decade,” said Hyoun Park, CEO and chief analyst of Amalgam Insights. “Tech companies were hiring with the assumption that they would grow at ridiculous rates that have not come to pass. Also, some tech companies think they can simply get rid of salespeople, especially in cash-cow industries where renewals seem to come in with little to no effort. Whether that is actually true or not, we are about to find out.”
The job cuts may also signal concerns about the near-term revenue potential of AI, Park said. While Microsoft is under pressure to invest heavily in AI to sustain its stock valuation, it may be turning to short-term operating expense reductions to support its financial performance.
“The planned $80 billion investment in AI infrastructure is especially interesting because those numbers assume a massive number of people will adopt Microsoft-related AI products,” Park said. “Are 50 million+ people willing to pay an additional amount on Microsoft products to support AI? That is a massive bet that has been completely unjustified by the current AI market today.”
Others point out that the company is betting on a long-term inflection in enterprise workload patterns driven by genAI, but current adoption patterns remain volatile.
“Reports of Microsoft pausing or renegotiating data center leases reflect a prudent but necessary response to these uncertainties,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “If workloads fail to scale or regulatory barriers increase, Microsoft, and by extension, other hyperscalers, could face underutilized infrastructure, prompting pricing recalibrations or service tier stratification.”
Changing sales environment
The focus on sales roles in the planned cuts is notable, with analysts saying it reflects a broader shift in how enterprise sales functions are evolving.
“The rise of AI copilots, telemetry-rich self-service portals, and data-driven journey mapping is reducing the need for large in-region sales teams,” Gogia said. “Microsoft’s realignment is part of a broader pattern also visible in Amazon, Google, and Salesforce.” However, while AI can personalize interactions at scale, it lacks the relational depth required in strategic deal-making, compliance negotiation, and multi-stakeholder orchestration, Gogia added.
Source:: Computer World
By Deepti Pathak Robotics has become a logistics game-changer, where speed and accuracy are paramount. Figure AI’s recent innovations…
The post Helix’s AI Humanoid Robots Are Reshaping Package Sorting appeared first on Fossbytes.
Source:: Fossbytes
By Hisan Kidwai Garena Free Fire Max is one of the most popular games on the planet, and for…
The post Garena Free Fire Max Redeem Codes for June 19 appeared first on Fossbytes.
Source:: Fossbytes
After reports suggested Meta has tried to poach employees from OpenAI and Google DeepMind by offering huge compensation packages, OpenAI CEO Sam Altman weighed in, confirming the reports during a podcast with his brother Jack Altman.
“There have been huge offers to a lot of our team,” said Sam Altman, “like $100 million in sign-on bonuses, more than that in annual compensation.”
According to Altman, the recruitment attempts have largely failed. “I’m really glad that, at least so far, none of our best people have chosen to take it.”
Altman thinks it’s because employees have decided that OpenAI has a better chance of achieving artificial general intelligence (AGI) than Meta. It could also be because they believe OpenAI could one day be a higher-valued company than Meta.
Source:: Computer World
Every Mac, iPhone, or iPad user should do everything they can to protect themselves against social engineering-based phishing attacks, a new report from Jamf warns. In a time of deep international tension, the digital threat environment reflects the zeitgeist, with hackers and attackers seeking out security weaknesses on a scale that continues to grow.
Based on extensive research, the latest edition of Jamf’s annual Security 360 report looks at security trends on Apple’s mobile devices and on Macs. It notes that we’ve seen more than 500 CVE security warnings on macOS 15 since its launch, and more than 10 million phishing attacks in the last year. The report should be on the reading list of anyone concerned with managing Apple’s products at scale (or even at home).
Security begins at home
With phishing and social engineering, protecting personal devices is as important as protecting your business machines. According to Jamf, more than 90% of cyberattacks originate from social engineering attacks, many of which begin by targeting people where they live. Not only that, but up to 2% of the 10 million phishing attacks the company identified are also classified as zero-day attacks — which means attacks are becoming dangerously sophisticated.
This has become such a pervasive problem that Apple in 2024 actually published a support document explaining what you should look for to avoid social engineering attacks. Attackers are increasingly creative, pose as trusted entities, and will use a combination of personal information and AI to create convincing attacks. They recognize, after all, that it is not the attack you spot that gets you, it’s the one you miss.
Within this environment, it is important to note that 25% of organizations have been affected by a social engineering attack — even as 55% of mobile devices used at work run a vulnerable operating system and 32% of organizations still have at least one device with critical vulnerabilities in use across their stack. (The latter is a slight improvement on last year, but not much.)
The nature of what attackers want also seems to be changing. Jamf noticed that attempts to steal information are surging, accounting for 28% of all Mac malware, which suggests a degree of surveillance is taking place. These info-stealing attacks are replacing trojans as the biggest threat to Mac security. The environment is similar on iPhones and iPads, all of which are seeing a similar spike in exploit attempts, zero-day attacks, and convincing social-engineering-driven moves to weaponize digital trust.
The bottom line? While Apple’s platforms are secure by design, the applications you run or the people you interact with remain the biggest security weaknesses the platform has. Security on any platform is only as strong as the weakest link in the chain, even while attack attempts increase and become more convincing and complex.
Defense is the best form of defense
Arnold Schwarzenegger allegedly believes that one should not complain about a situation unless you are prepared to try to do something to make it better. “If you see a problem and you don’t come to the table with a potential solution, I don’t want to hear your whining about how bad it is,” he says.
With that in mind, what can you as a reader do today to help address the growing scourge of Apple-focused malware? Here are some suggestions from Jamf:
Update devices to the latest software.
Protect devices with a passcode.
Use two-factor authentication and strong passwords to protect Apple accounts.
Install apps only from the App Store.
Use strong and unique passwords online.
Don’t click on links or attachments from unknown senders.
And, of course, don’t use older, unprotected operating systems or devices — certainly not when handling critical or confidential data.
Layer up, winter is coming
Organizations can build on these personal protections, of course. Apple devices need Apple-specific security solutions, including endpoint management solutions; enterprises should adopt device management; and they should prepare for the inevitable attacks by fostering a positive, blame-free culture for incident reporting and by eliminating inter-departmental siloes. Investment in staff training is important, too.
It is also important to understand that in a hybrid, multi-platform, ultra mobile world there is no such thing as strict perimeter security anymore. That’s why it is essential to secure endpoints and implement zero-trust. It’s also why it is important to adopt a new posture toward security — there is no single form of effective security protection. At best, your business security relies on layers of protection that together form an effective and flexible security defense.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
Source:: Computer World
By Siôn Geschwindt Without water, the average human would die after about five days. Without energy, our society as we know it would collapse. But what about a world without AI? According to British business leaders, the consequences would be equally catastrophic. A new report by London-based software firm Endava, surveying 500 entrepreneurs, found that two-thirds of respondents rank AI as socially vital — on par with water and electricity. A whopping 93% of the respondents want industry and government to implement AI as fast as possible. Meanwhile, 84% say they use AI as a “companion” or conversation partner at least once…This story continues at The Next Web
Source:: The Next Web
By Siôn Geschwindt A Swedish startup is taking defence tech back to basics — by building the country’s first TNT factory since the Cold War. Stockholm-based Swebal has secured a €3mn investment for the plant, slated to enter full operation in late 2027. Located in Nora, a town about three hours from the capital, the factory is expected to produce more than 4,000 tonnes of TNT a year. Investors in the facility include the co-founder of venture capital firm EQT, Thomas von Koch, serial entrepreneur Pär Svärdson, and Sweden’s former army chief, Major General Karl Engelbrektson. Joakim Sjöblom, Swebal’s founder, said the…This story continues at The Next Web
Source:: The Next Web
By Partner Content Did you know that anyone can learn digital art now? With a complete pack of realistic…
The post Drawing Made Easy: Learn How to Draw with Drawing Desk appeared first on Fossbytes.
Source:: Fossbytes
By Adarsh Verma Social media is changing at an incredible rate, which makes the journey of an influencer as…
The post Beginner’s Guide on Influencer Journey in 2025 appeared first on Fossbytes.
Source:: Fossbytes
By Siôn Geschwindt Munich-based defence tech startup Helsing has raised €600mn as geopolitical tensions trigger a flood of capital into AI warfare. The large investment was led by Spotify CEO Daniel Ek’s VC firm Prima Materia. It brings the company’s total raised to north of €1.3bn, building on a €450mn funding round in July last year. Helsing didn’t disclose its updated valuation. However, according to the Financial Times, the unicorn company is now worth €12bn, making it one of Europe’s five most valuable private tech companies. Prima Materia was one of Helsing’s earliest backers — a move that sparked boycotts among artists on…This story continues at The Next Web
Source:: The Next Web
Look, it’s not just about Siri and ChatGPT; artificial intelligence will drive future tech experiences and should be seen as a utility. That’s the strategic imperative driving Apple’s WWDC introduction of the Foundation Models Framework for its operating systems. It represents a series of tools that will let developers exploit Apple’s own on-device AI large language models (LLMs) in their apps. This was one of a host of developer-focused improvements the company talked about last week.
The idea is that developers will be able to use the models with as little as three lines of code. So, if you want to build a universal CMS editor for iPad, you can add Writing Tools and translation services to your app to help writers generate better copy for use across an international network of language sites.
Better yet, when you build that app, or any other app, Apple won’t charge you for access to its core Apple Intelligence models, which themselves operate on the device. That’s great, as it means developers can, at no charge, deliver what will over time become an extensive suite of AI features within their apps while also securing user privacy.
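Apple’s “as little as three lines of code” pitch maps to something like the following sketch, based on the FoundationModels API Apple previewed at WWDC; treat the exact names and signatures as illustrative, since they may shift before general release:

```swift
import FoundationModels

// Create a session backed by the on-device Apple Intelligence model
let session = LanguageModelSession()

// Ask the model to rework some copy; processing stays on the device
let response = try await session.respond(
    to: "Rewrite this headline in a friendlier tone: 'System failure imminent'"
)
print(response.content)
```

Because the model ships with the OS, there is no API key, no network call, and no per-token bill in this flow.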
What are Foundation Models?
In a note on its developer website, Apple tells us the models it made available in the Foundation Models Framework are particularly good at text-generation tasks such as summarization, “entity extraction,” text understanding, refinement, dialogue for games, creative content generation, and more.
You get:
Apple Intelligence tools as a service for use in apps.
Privacy, as all data stays on the device.
The ability to work offline because processing takes place on the device.
Small apps, since the LLM is built into the OS.
Apple has also made solid decisions in how it has built Foundation Models. Guided Generation, for example, works to ensure the LLM provides consistently structured responses for use within the apps you build, rather than the messy, unstructured output many LLMs generate; Apple’s framework is also able to provide complex responses in a more usable format.
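As a rough sketch of how Guided Generation keeps responses structured — the `@Generable` and `@Guide` annotations follow Apple’s WWDC previews, but the details here are illustrative rather than definitive:

```swift
import FoundationModels

// Declaring a @Generable type tells the model the exact shape to produce
@Generable
struct Headline {
    @Guide(description: "A headline of at most ten words")
    let text: String
    let tags: [String]
}

let session = LanguageModelSession()
let response = try await session.respond(
    to: "Write a headline about a new iPad CMS editor",
    generating: Headline.self
)
// response.content arrives as a typed Headline value, not free-form text
```

Instead of parsing prose out of a chat reply, your app receives a Swift value it can use directly.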
Apple also said it is possible to give the Apple Intelligence LLM access to tools beyond your own code. Dev magazine explains that “tool calling” means you can instruct the LLM when it needs to work with an external tool to bring in information, such as up-to-the-minute weather reporting. That can also extend to actions, such as booking trips.
This kind of access to real information helps keep the LLM grounded, preventing it from fabricating data to complete its task. Finally, the company has also figured out how to make apps remember AI conversations, which means you can engage in extended, multi-request sessions rather than single-use requests. To stimulate development using Foundation Models, Apple has built in support for working with them inside Xcode Playgrounds.
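Tool calling, as previewed, works by handing the session objects that conform to a `Tool` protocol; the model decides when to invoke them. The weather tool below is a hypothetical illustration under that assumption, not Apple sample code:

```swift
import FoundationModels

// A hypothetical tool the model can call when it needs live data
struct WeatherTool: Tool {
    let name = "getWeather"
    let description = "Returns current weather for a named city"

    @Generable
    struct Arguments {
        let city: String
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        // In a real app, query a weather service here
        ToolOutput("Light rain and 14°C in \(arguments.city)")
    }
}

// Register the tool when creating the session; the model invokes it as needed
let session = LanguageModelSession(tools: [WeatherTool()])
let response = try await session.respond(to: "Should I pack an umbrella for Amsterdam?")
```

The model answers from the tool’s real output rather than guessing, which is exactly the grounding the paragraph above describes.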
Walking toward the horizon
Unless you’ve spent the last 12 months locked away from all communications on some form of religious retreat to promote world peace (in which case, I think you should have prayed harder), you’ll know Apple Intelligence has its critics. Most of that criticism is based on the idea that Apple Intelligence needs to be a smart chatbot like ChatGPT (and it isn’t at all unfair to castigate Siri for being a shadow of what it was intended to be).
But that focus on Siri skips the more substantial value released when using LLMs for specific tasks, such as those Writing Tools I mentioned. Yes, Siri sucks a little (but will improve) and Apple Intelligence development has been an embarrassment to the company. But that doesn’t mean everything about Apple’s AI is poor, nor does it mean it won’t get better over time.
What Apple understands is that by making those AI models accessible to developers and third-party apps, it is empowering those who can’t afford fee-based LLMs to get creative with AI. That’s quite a big deal, one that could be considered an “iPhone moment,” or at least an “App Store moment,” in its own right, and it should enable a lot of experimentation.
“We think this will ignite a whole new wave of intelligent experiences in the apps users rely on every day,” Craig Federighi, Apple senior vice president for software engineering, said at WWDC. “We can’t wait to see what developers create.”
What we need
We need that experimentation. For good or ill, we know AI is going to be everywhere, and whether you are comfortable with that truth is less important than figuring out how to best position yourself to be resilient to that reality.
Enabling developers to build AI inside their apps easily and at no cost means they will be able to experiment, and hopefully forge their own path. It also means Apple has dramatically lowered the barrier to entry for AI development on its platforms, even while it is urgently engaged in expanding what AI models it provides within Apple Intelligence. As it introduces new foundation models, developers will be able to use them, empowering more experimenting.
With the cost to privacy and the cost of entry both set to zero, Foundation Models change the argument around AI on Apple's platforms. It's not just about a smarter Siri; it's about a smarter ecosystem, one that Apple hopes developers will help it build, one AI-enabled app at a time.
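As a rough illustration of what that zero-cost access looks like, here is a minimal sketch of calling the on-device model through the Foundation Models framework, based on Apple's WWDC announcement. The framework is still in beta, so exact API names may shift, and the `summarize` helper here is hypothetical:

```swift
import FoundationModels

// Hypothetical helper: summarize user text with the on-device model.
// No API key and no per-request fee; inference runs locally, so the
// note's content never leaves the user's device.
func summarize(_ noteText: String) async throws -> String {
    // Open a session with the system's on-device foundation model.
    let session = LanguageModelSession()

    // Send a prompt and await the model's reply.
    let response = try await session.respond(
        to: "Summarize this note in one sentence: \(noteText)"
    )
    return response.content
}
```

The point of the sketch is the economics, not the code: a developer who could never afford metered cloud LLM calls can ship this in an app for free.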
The Foundation Models framework is already available to developers for beta testing, with public betas set to ship alongside the operating systems in July.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
Source:: Computer World
Microsoft experienced a significant service disruption across its Microsoft 365 services on Monday, affecting core applications including Microsoft Teams and Exchange Online. The outage left users globally unable to access collaboration and communication tools critical to consumers as well as enterprise workflows.
In a series of updates posted on X through the official account of Microsoft 365 Status, Microsoft acknowledged the incident and confirmed that it was actively investigating user reports of service impact. The incident was tracked under the identifier MO1096211 in the Microsoft 365 Admin Center.
Minutes after initial acknowledgement, Microsoft initiated mitigation steps and reported that all services were in the process of recovering. “We’ve confirmed that all services are recovering following our mitigation actions. We’re continuing to monitor recovery,” the company said in an update.
Roughly an hour later, Microsoft posted another update, saying, “Our telemetry indicates that all of our services have recovered and that the impact is resolved.”
“The Microsoft outage that disrupted Teams, Exchange Online, and related services was ultimately caused by an overly aggressive traffic management update that unintentionally rerouted and choked legitimate service traffic. According to Microsoft’s official post-incident report, the faulty code was rolled back swiftly, but not before triggering global access failures, authentication timeouts, and mass user logouts,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research.
Microsoft did not immediately respond to a request for comment.
Not an isolated incident
This incident adds to a growing number of high-profile cloud service disruptions across the industry, raising questions about the resilience of hyperscale infrastructure and the impact on cloud-dependent enterprises. In the last 30 days, IBM Cloud services were disrupted three times, and a Google Cloud outage just last week impacted over 50 services globally for more than seven hours.
Microsoft, in particular, has experienced a steady stream of service disruptions in recent months, exposing persistent fault lines in its cloud infrastructure.
In March this year, an outage disrupted Outlook, Teams, Excel, and more, impacting over 37,000 users. In May, Outlook suffered another outage, attributed to a problematic change.
According to Gogia, this sustained pattern reveals architectural brittleness in Microsoft’s control-plane infrastructure — especially in identity, traffic orchestration, and rollback governance — and reinforces the urgent need for structural mitigation.
Costly outages call for contingency planning
Given the complexity and global scale of hyperscale cloud infrastructures, outages remain an ongoing risk for leading SaaS platforms, including Microsoft 365. The risk is especially acute for enterprises operating in hybrid and remote work environments, where such disruptions threaten business continuity.
Such outages can lead to lost productivity and disrupted communications, depending on which applications are affected and the extent of the outage. For some organizations, this could mean losses ranging from thousands to potentially millions of dollars, explained Neil Shah, vice president of research at Counterpoint.
Manish Rawat, analyst, TechInsights, said industry estimates suggest that IT downtime can cost mid- to large-sized enterprises between $100,000 and $500,000 per hour, depending on their sector and the criticality of operations. “For large organizations, even a brief 2–3 hour outage could result in millions in lost productivity, reputational harm, and serious operational setbacks, especially in high-stakes sectors like finance, healthcare, and manufacturing,” he said.
Given the recent incidents involving Microsoft 365 services alone, experts believe that enterprises must reduce their overdependence on Microsoft 365. “Organizations should adopt robust contingency plans that include alternative communication tools, offline access to critical documents, and a comprehensive incident response framework,” said Prabhu Ram, VP for industry research group at CMR.
Source:: Computer World
By Hisan Kidwai Inspired by the super popular Tokyo Ghoul anime and manga series, Ghoul RE is a hardcore…
The post Ghoul RE Codes (June 2025) appeared first on Fossbytes.
Source:: Fossbytes