After reports suggested Meta has tried to poach employees from OpenAI and Google DeepMind by offering huge compensation packages, OpenAI CEO Sam Altman weighed in, confirming the reports during a podcast with his brother Jack Altman.
“There have been huge offers to a lot of our team,” said Sam Altman, “like $100 million in sign-on bonuses, more than that in annual compensation.”
According to Altman, the recruitment attempts have largely failed. “I’m really glad that, at least so far, none of our best people have chosen to take it.”
Sam Altman says he thinks it’s because employees have decided that OpenAI has a better chance of achieving artificial general intelligence (AGI) than Meta. It could also be because they believe that OpenAI could one day be a higher-valued company than Meta.
Source:: Computer World
Every Mac, iPhone, or iPad user should do everything they can to protect themselves against social engineering-based phishing attacks, a new report from Jamf warns. In a time of deep international tension, the digital threat environment reflects the zeitgeist, with hackers and attackers seeking out security weaknesses on a scale that continues to grow.
Based on extensive research, the latest edition of Jamf’s annual Security 360 report looks at security trends on Apple’s mobile devices and on Macs. It notes that we’ve seen more than 500 CVE security warnings on macOS 15 since its launch, and more than 10 million phishing attacks in the last year. The report should be on the reading list of anyone concerned with managing Apple’s products at scale (or even at home).
Security begins at home
With phishing and social engineering, protecting personal devices is as important as protecting your business machines. According to Jamf, more than 90% of cyberattacks originate in social engineering, and many of them begin by targeting people where they live. Not only that, but up to 2% of the 10 million phishing attacks the company identified are also classified as zero-day attacks — which means attacks are becoming dangerously sophisticated.
This has become such a pervasive problem that Apple in 2024 actually published a support document explaining what you should look for to avoid social engineering attacks. Attackers are increasingly creative, pose as trusted entities, and will use a combination of personal information and AI to create convincing attacks. They recognize, after all, that it is not the attack you spot that gets you, it’s the one you miss.
Within this environment, it is important to note that 25% of organizations have been affected by a social engineering attack — even as 55% of mobile devices used at work run a vulnerable operating system and 32% of organizations still have at least one device with critical vulnerabilities in use across their stack. (The latter is a slight improvement on last year, but not much.)
The nature of what attackers want also seems to be changing. Jamf noticed that attempts to steal information are surging, accounting for 28% of all Mac malware, which suggests some degree of surveillance is taking place. These info-stealing attacks are replacing trojans as the biggest threat to Mac security. The environment is similar on iPhones and iPads, both of which are seeing a similar spike in exploit attempts, zero-day attacks, and convincing social-engineering-driven moves to weaponize digital trust.
The bottom line? While Apple’s platforms are secure by design, the applications you run or the people you interact with remain the biggest security weaknesses the platform has. Security on any platform is only as strong as the weakest link in the chain, even while attack attempts increase and become more convincing and complex.
Defense is the best form of defense
Arnold Schwarzenegger allegedly believes that one should not complain about a situation unless you are prepared to try to do something to make it better. “If you see a problem and you don’t come to the table with a potential solution, I don’t want to hear your whining about how bad it is,” he says.
With that in mind, what can you as a reader do today to help address the growing scourge of Apple-focused malware? Here are some suggestions from Jamf:
Update devices to the latest software.
Protect devices with a passcode.
Use two-factor authentication and strong passwords to protect Apple accounts.
Install apps only from the App Store.
Use strong and unique passwords online.
Don’t click on links or attachments from unknown senders.
And, of course, don’t use older, unprotected operating systems or devices — certainly not when handling critical or confidential data.
Layer up, winter is coming
Organizations can build on these personal protections, of course. Apple devices need Apple-specific security tools, including endpoint management; enterprises should adopt device management; and they should prepare for the inevitable attacks by fostering a positive, blame-free culture for incident reporting and by eliminating inter-departmental silos. Investment in staff training is important, too.
It is also important to understand that in a hybrid, multi-platform, ultra-mobile world there is no such thing as strict perimeter security anymore. That’s why it is essential to secure endpoints and implement zero trust. It’s also why it is important to adopt a new posture toward security — there is no single form of effective security protection. At best, your business security relies on layers of protection that together form an effective and flexible security defense.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
Source:: Computer World
By Siôn Geschwindt Without water, the average human would die after about five days. Without energy, our society as we know it would collapse. But what about a world without AI? According to British business leaders, the consequences would be equally catastrophic. A new report by London-based software firm Endava, surveying 500 entrepreneurs, found that two-thirds of respondents rank AI as socially vital — on par with water and electricity. A whopping 93% of the respondents want industry and government to implement AI as fast as possible. Meanwhile, 84% say they use AI as a “companion” or conversation partner at least once…This story continues at The Next Web
Source:: The Next Web
By Siôn Geschwindt A Swedish startup is taking defence tech back to basics — by building the country’s first TNT factory since the Cold War. Stockholm-based Swebal has secured a €3mn investment for the plant, slated to enter full operation in late 2027. Located in Nora, a town about three hours from the capital, the factory is expected to produce more than 4,000 tonnes of TNT a year. Investors in the facility include the co-founder of venture capital firm EQT, Thomas von Koch, serial entrepreneur Pär Svärdson, and Sweden’s former army chief, Major General Karl Engelbrektson. Joakim Sjöblom, Swebal’s founder, said the…This story continues at The Next Web
Source:: The Next Web
By Partner Content Did you know that anyone can learn digital art now? With a complete pack of realistic…
The post Drawing Made Easy: Learn How to Draw with Drawing Desk appeared first on Fossbytes.
Source:: Fossbytes
By Adarsh Verma Social media is changing at an incredible rate, which makes the journey of an influencer as…
The post Beginner’s Guide on Influencer Journey in 2025 appeared first on Fossbytes.
Source:: Fossbytes
By Siôn Geschwindt Munich-based defence tech startup Helsing has raised €600mn as geopolitical tensions trigger a flood of capital into AI warfare. The large investment was led by Spotify CEO Daniel Ek’s VC firm Prima Materia. It brings the company’s total raised to north of €1.3bn, building on a €450mn funding round in July last year. Helsing didn’t disclose its updated valuation. However, according to the Financial Times, the unicorn company is now worth €12bn, making it one of Europe’s five most valuable private tech companies. Prima Materia was one of Helsing’s earliest backers — a move that sparked boycotts among artists on…This story continues at The Next Web
Source:: The Next Web
Look, it’s not just about Siri and ChatGPT; artificial intelligence will drive future tech experiences and should be seen as a utility. That’s the strategic imperative driving Apple’s WWDC introduction of the Foundation Models Framework for its operating systems. It represents a series of tools that will let developers exploit Apple’s own on-device AI large language models (LLMs) in their apps. This was one of a host of developer-focused improvements the company talked about last week.
The idea is that developers will be able to use the models with as little as three lines of code. So, if you want to build a universal CMS editor for iPad, you can add Writing Tools and translation services to your app to help writers generate better copy for use across an international network of language sites.
Better yet, when you build that app, or any other app, Apple won’t charge you for access to its core Apple Intelligence models – which themselves run on the device. That’s great: it means developers can, at no charge, deliver what will over time become an extensive suite of AI features within their apps while also securing user privacy.
What are Foundation Models?
In a note on its developer website, Apple tells us the models it made available in the Foundation Models Framework are particularly good at text-generation tasks such as summarization, “entity extraction,” text understanding, refinement, dialogue for games, creative content generation, and more.
You get:
Apple Intelligence tools as a service for use in apps.
Privacy, as all data stays on the device.
The ability to work offline because processing takes place on the device.
Small apps, since the LLM is built into the OS.
Apple has also made solid decisions in how it has built Foundation Models. Guided Generation, for example, works to ensure the LLM provides consistently structured responses for use within the apps you build, rather than the messy output many LLMs generate; Apple’s framework is also able to provide complex responses in a more usable format.
Apple also said it is possible to give the Apple Intelligence LLM access to external tools. Dev magazine explains that “tool calling” means you can instruct the LLM when it needs to work with an external tool to bring in information, such as up-to-the-minute weather reporting. That can also extend to actions, such as booking trips.
This kind of access to real information helps keep the LLM grounded, preventing it from using fake data to resolve its task. Finally, the company has also figured out how to make apps remember AI conversations, which means you can engage in ongoing, multi-turn sessions of requests rather than single-use requests. To stimulate development using Foundation Models, Apple has built in support for doing so inside Xcode Playgrounds.
Walking toward the horizon
Unless you’ve spent the last 12 months locked away from all communications on some form of religious retreat to promote world peace (in which case, I think you should have prayed harder), you’ll know Apple Intelligence has its critics. Most of that criticism is based on the idea that Apple Intelligence needs to be a smart chatbot like ChatGPT (and it isn’t at all unfair to castigate Siri for being a shadow of what it was intended to be).
But that focus on Siri skips the more substantial value released when using LLMs for specific tasks, such as those Writing Tools I mentioned. Yes, Siri sucks a little (but will improve) and Apple Intelligence development has been an embarrassment to the company. But that doesn’t mean everything about Apple’s AI is poor, nor does it mean it won’t get better over time.
What Apple understands is that by making those AI models accessible to developers and third-party apps, it is empowering those who can’t afford fee-based LLMs to get creative with AI. That’s quite a big deal, one that could be considered an “iPhone moment,” or at least an “App Store moment,” in its own right, and it should enable a lot of experimentation.
“We think this will ignite a whole new wave of intelligent experiences in the apps users rely on every day,” Craig Federighi, Apple senior vice president for software engineering, said at WWDC. “We can’t wait to see what developers create.”
What we need
We need that experimentation. For good or ill, we know AI is going to be everywhere, and whether you are comfortable with that truth is less important than figuring out how to best position yourself to be resilient to that reality.
Enabling developers to build AI inside their apps easily and at no cost means they will be able to experiment, and hopefully forge their own path. It also means Apple has dramatically lowered the barrier to entry for AI development on its platforms, even while it is urgently engaged in expanding what AI models it provides within Apple Intelligence. As it introduces new foundation models, developers will be able to use them, empowering more experimentation.
With the cost to privacy and cost of entry set to zero, Foundation Models change the argument around AI on Apple’s platforms. It’s not just about a smarter Siri, it is about a smarter ecosystem — one that Apple hopes developers will help it build, one AI-enabled app at a time.
The Foundation Models Framework is already available to developers for beta testing, with public betas shipping with the operating systems in July.
Source:: Computer World
Microsoft experienced a significant service disruption across its Microsoft 365 services on Monday, affecting core applications including Microsoft Teams and Exchange Online. The outage left users globally unable to access collaboration and communication tools critical to consumers as well as enterprise workflows.
In a series of updates posted on X through the official account of Microsoft 365 Status, Microsoft acknowledged the incident and confirmed that it was actively investigating user reports of service impact. The incident was tracked under the identifier MO1096211 in the Microsoft 365 Admin Center.
Minutes after initial acknowledgement, Microsoft initiated mitigation steps and reported that all services were in the process of recovering. “We’ve confirmed that all services are recovering following our mitigation actions. We’re continuing to monitor recovery,” the company said in an update.
Roughly an hour later, Microsoft posted another update, saying, “Our telemetry indicates that all of our services have recovered and that the impact is resolved.”
“The Microsoft outage that disrupted Teams, Exchange Online, and related services was ultimately caused by an overly aggressive traffic management update that unintentionally rerouted and choked legitimate service traffic. According to Microsoft’s official post-incident report, the faulty code was rolled back swiftly, but not before triggering global access failures, authentication timeouts, and mass user logouts,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research.
Microsoft did not immediately respond to a request for comment.
Not an isolated incident
This incident adds to a growing number of high-profile cloud service disruptions across the industry, raising questions about the resilience of hyperscale infrastructure and the impact on cloud-dependent enterprises. In the last 30 days, IBM Cloud services were disrupted three times, and a Google Cloud outage just last week impacted over 50 services globally for over seven hours.
Microsoft, in particular, has experienced a steady stream of service disruptions in recent months, exposing persistent fault lines in its cloud infrastructure.
In March this year, an outage disrupted Outlook, Teams, Excel, and more, impacting over 37,000 users. In May, Outlook suffered another outage, which Microsoft attributed to a recent change.
According to Gogia, this sustained pattern reveals architectural brittleness in Microsoft’s control-plane infrastructure — especially in identity, traffic orchestration, and rollback governance — and reinforces the urgent need for structural mitigation.
Costly outages call for contingency planning
Given the complexity and global scale of hyperscale cloud infrastructures, outages remain an ongoing risk for leading SaaS platforms, including Microsoft 365 — more so for enterprises that operate in hybrid and remote work environments, where such outages threaten business continuity.
Such outages can lead to lost productivity and disrupted communications, depending on the applications they affect and the extent of the outage — losses ranging from thousands of dollars to potentially millions for some, explained Neil Shah, vice president of research at Counterpoint.
Manish Rawat, analyst, TechInsights, said industry estimates suggest that IT downtime can cost mid- to large-sized enterprises between $100,000 and $500,000 per hour, depending on their sector and the criticality of operations. “For large organizations, even a brief 2–3 hour outage could result in millions in lost productivity, reputational harm, and serious operational setbacks, especially in high-stakes sectors like finance, healthcare, and manufacturing,” he said.
Given the recent incidents involving Microsoft 365 services alone, experts believe that enterprises must reduce their overdependence on Microsoft 365. “Organizations should adopt robust contingency plans that include alternative communication tools, offline access to critical documents, and a comprehensive incident response framework,” said Prabhu Ram, VP for industry research group at CMR.
Source:: Computer World
By Hisan Kidwai Inspired by the super popular Tokyo Ghoul anime and manga series, Ghoul RE is a hardcore…
The post Ghoul RE Codes (June 2025) appeared first on Fossbytes.
Source:: Fossbytes
By Deepti Pathak Ghoul Re is an exciting Roblox game based on the dark universe of ghouls and humans,…
The post Official Ghoul Re Trello & Discord Link (2025) appeared first on Fossbytes.
Source:: Fossbytes
By Siôn Geschwindt The Netherlands’ hottest tech festival is just around the corner, and the buzz is electric. TNW Conference brings together Europe’s sharpest minds, boldest startups, and game-changing tech leaders at Amsterdam’s NDSM on June 19 and 20. It’s two days packed with big ideas, fierce debates, and innovations that could shape the future. We’ve combed through the packed schedule and picked seven sessions you absolutely can’t miss — the ones set to challenge the status quo and get everyone talking. 1.Where’s Iron Man? Why tech belongs in defence — Purple Stage, Thursday 16:25 – 16:55 Capital is pouring into defence tech…This story continues at The Next Web
Source:: The Next Web
Generative AI (genAI) poses a classic IT dilemma. When it works well, it is amazingly versatile and useful, fueling dreams that it can do almost anything.
The problem is that when it does not do well, it might deliver wrong answers, override its instructions, and pretty much reinforce the plotlines of every sci-fi horror movie ever made. That is why I was horrified when OpenAI late last month announced changes to make it much easier to give its genAI models full access to any software using Model Context Protocol (MCP).
“We’re adding support for remote MCP servers in the Responses API, building on the release of MCP support in the Agents SDK,” the company said. “MCP is an open protocol that standardizes how applications provide context to LLMs. By supporting MCP servers in the Responses API, developers will be able to connect our models to tools hosted on any MCP server with just a few lines of code.”
A large number of companies have publicly said they will use MCP, including those with popular apps such as PayPal, Stripe, Shopify, Square, Slack, QuickBooks, Salesforce, and Google Drive.
The ability for a genAI large language model (LLM) to coordinate data and actions with all of those apps — and many more — certainly sounds attractive. But it’s dangerous because it allows access to mountains of highly sensitive compliance-relevant data — and a mistaken move could deeply hurt customers. MCP would also allow genAI tools to control those apps, exponentially increasing risks.
If the technology today cannot yet do its job properly and consistently, what level of hallucinogens are needed to justify expanding its power to other apps?
Christofer Hoff, the CTO and CSO at LastPass, took to LinkedIn to appeal to common sense. (OK, if one wanted to appeal to common sense, LinkedIn is probably not the best place to start, but that’s a different story.)
“I love the enthusiasm,” Hoff wrote. “I think the opportunity for end-to-end workflow automation with a standardized interface is fantastic vs mucking about hardcoding your own. That said, the security Jiminy Cricket occupying my frontal precortex is screaming in terror. The bad guys are absolutely going to love this. Who needs malware when you have MCP? Like TCP/IP, MCP will likely go down as another accidental success. At a recent talk, Anthropic noted that they were very surprised at the uptake. And just like TCP/IP, it suffers from critical deficiencies that will have stuff band-aided atop for years to come.”
Rex Booth, the CISO at identity vendor SailPoint, said the concerns are justified. “If you are connecting your agents to a bunch of highly sensitive data sources, you need to have strong safeguards in place,” he said.
But as Anthropic itself has noted, genAI models do not always obey their own guardrails.
QueryPal CEO Dev Nag sees inevitable data usage problems.
“You have to specify what files [the model] is allowed to look at and what files it is not allowed to look at and you have to be able to specify that,” Nag said. “And we already know that LLMs don’t do that perfectly. LLMs hallucinate, make incorrect textual assumptions.”
Nag argued that the risk is — or at least should be — already well known to IT decision makers. “It’s the same as the API risk,” Nag said. “If you open up your API to an outside vendor with their own code, it could do anything. MCP is just APIs on steroids. I don’t think you’d want AI to be looking at your core financials and be able to change your accounting.”
The best defense is to not trust the guardrails on either side of the communication, but to give the exclusion instructions to both sides. In an example with the model trying to access Google Docs, Nag said, dual instructions are the only viable approach.
“It should be enforced at both sides, with the Google Doc layer being told that it can’t accept any calls from the LLM,” Nag said. “On the LLM side, it should be told ‘OK, my intentions are to show my work documents, but not my financial documents.’”
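Nag’s “enforce it on both sides” advice can be sketched minimally. Both the model side and the data-store side keep their own deny list, so a single failed guardrail is not enough to leak a restricted file. All names here are illustrative — this is the pattern, not a real MCP implementation.

```python
# Dual-side enforcement sketch; names and policies are hypothetical.

LLM_DENYLIST = {"financials.xlsx"}                    # what the model agrees not to request
STORE_DENYLIST = {"financials.xlsx", "payroll.csv"}   # what the store refuses to serve

def store_fetch(path: str) -> str:
    # Independent check: even if the model's own guardrail is bypassed
    # (say, via a hallucinated or injected request), the store refuses.
    if path in STORE_DENYLIST:
        raise PermissionError(f"store policy blocks {path}")
    return f"contents of {path}"

def llm_request(path: str) -> str:
    # First line of defense: the model-side policy.
    if path in LLM_DENYLIST:
        raise PermissionError(f"model policy blocks {path}")
    return store_fetch(path)

print(llm_request("notes.txt"))
```

The point of the redundancy is that LLM-side instructions are probabilistic — the model may not follow them — while the store-side check is ordinary deterministic access control.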
Bottom line: the concept of MCP interactiveness is a great one. The likely near-term reality? Not so much.
Source:: Computer World
As companies move from testing out generative AI tools and models into real-world use — also known as inference — they’re having trouble predicting what that use will lead to in terms of cloud costs, according to a new report from analyst firm Canalys.
“Unlike training, which is a one-time investment, inference represents a recurring operational cost, making it a crucial constraint on the path to commercializing AI,” said Canalys senior director Rachel Brindley in a statement. “As AI moves from research to large-scale deployment, companies are increasingly focusing on cost-effectiveness in inference, comparing models, cloud platforms, and hardware architectures such as GPUs versus custom accelerators.”
According to Canalys researcher Yi Zhang, many AI services rely on usage-based pricing models that charge per token or API call; that makes it difficult to predict costs when scaling up usage.
“When inference costs are volatile or excessively high, companies are forced to limit usage, reduce model complexity, or restrict implementation to high-value scenarios. As a result, the broader potential of AI remains underutilized,” said Zhang.
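The forecasting problem Zhang describes follows directly from per-token pricing: cost scales with both request volume and response length, and neither is fixed. A back-of-the-envelope sketch makes this concrete — the rates below are hypothetical, not any vendor’s actual prices.

```python
# Why per-token pricing makes inference spend hard to forecast.
# Prices are illustrative placeholders, not real vendor rates.

PRICE_PER_1K_INPUT = 0.003   # USD per 1,000 input tokens (hypothetical)
PRICE_PER_1K_OUTPUT = 0.015  # USD per 1,000 output tokens (hypothetical)

def monthly_inference_cost(requests: int, in_tokens: int, out_tokens: int) -> float:
    """Estimated monthly spend for a given usage profile."""
    per_request = ((in_tokens / 1000) * PRICE_PER_1K_INPUT
                   + (out_tokens / 1000) * PRICE_PER_1K_OUTPUT)
    return requests * per_request

# The same app at pilot scale vs. full rollout: a 100x jump in requests
# means a 100x jump in cost, before any change in prompt or response size.
pilot = monthly_inference_cost(requests=10_000, in_tokens=500, out_tokens=300)
rollout = monthly_inference_cost(requests=1_000_000, in_tokens=500, out_tokens=300)
print(f"pilot: ${pilot:,.2f}/month, rollout: ${rollout:,.2f}/month")
```

Output tokens are typically priced several times higher than input tokens, so even a modest increase in average response length can move the bill disproportionately — which is why teams end up limiting usage or reducing model complexity, as Zhang notes.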
Source:: Computer World
By Siôn Geschwindt Two satellites equipped with European tech have delicately pulled off an artificial solar eclipse — giving scientists unmatched views of the Sun’s scorching corona. The European Space Agency (ESA) developed the probes alongside more than 40 space tech firms. Among them are a trio of startups, which contributed several key technologies for the mission: sensors for solar tracking, light detectors to fine-tune positioning, and software that orchestrated the satellites’ intricate flight path. Launched from India’s Satish Dhawan Space Centre last year, the expedition — Proba-3 — could mark a new era for solar science. The Sun’s inner corona, coloured artificially…This story continues at The Next Web
Source:: The Next Web
By Deepti Pathak Swiping endlessly through Instagram Reels is probably everyone’s favorite pastime. And while most Reels aren’t particularly…
The post How To View Your Instagram Reel History: 4 Ways appeared first on Fossbytes.
Source:: Fossbytes
Meta is looking to up its weakening AI game with a key talent grab.
Following days of speculation, the social media giant has confirmed that Scale AI’s founder and CEO, Alexandr Wang, is joining Meta to work on its AI efforts.
Meta will invest $14.3 billion in Scale AI as part of the deal, and will have a 49% stake in the AI startup, which specializes in data labeling and model evaluation services. Other key Scale employees will also move over to Meta, while CSO Jason Droege will step in as Scale’s interim CEO.
This move comes as the Mark Zuckerberg-led company goes all-in on building a new research lab focused on “superintelligence,” the next step beyond artificial general intelligence (AGI).
The arrangement also reflects a growing trend in big tech, where industry giants are buying companies without really buying them — what’s increasingly being referred to as “acqui-hiring.” It involves recruiting key personnel from a company, licensing its technology, and selling its products, but leaving it as a private entity.
“This is fundamentally a massive ‘acqui-hire’ play disguised as a strategic investment,” said Wyatt Mayham, lead AI consultant at Northwest AI Consulting. “While Meta gets Scale’s data infrastructure, the real prize is Wang joining Meta to lead their superintelligence lab. At the $14.3 billion price tag, this might be the most expensive individual talent acquisition in tech history.”
Closing gaps with competitors
Meta has struggled to keep up with OpenAI, Anthropic, and other key competitors in the AI race, recently even delaying the launch of its new flagship model, Behemoth, purportedly due to internal concerns about its performance. It has also seen the departure of several of its top researchers.
“It’s not really a secret at this point that Meta’s Llama 4 models have had significant performance issues,” Mayham said. “Zuck is essentially betting that Wang’s track record building AI infrastructure can solve Meta’s alignment and model quality problems faster than internal development.” And, he added, Scale’s enterprise-grade human feedback loops are exactly what Meta’s Llama models need to compete with ChatGPT and Claude on reliability and task-following.
Data quality, a key focus for Wang, is a big factor in solving those performance problems. He wrote in a note to Scale employees on Thursday, later posted on X (formerly Twitter), that when he founded Scale AI in 2016 amidst some of the early AI breakthroughs, “it was clear even then that data was the lifeblood of AI systems, and that was the inspiration behind starting Scale.”
But despite Meta’s huge investment, Scale AI is underscoring its commitment to sovereignty: “Scale remains an independent leader in AI, committed to providing industry-leading AI solutions and safeguarding customer data,” the company wrote in a blog post. “Scale will continue to partner with leading AI labs, multinational enterprises, and governments to deliver expert data and technology solutions through every phase of AI’s evolution.”
Allowing big tech to side-step notification
But while it’s only just been inked, the high-profile deal is already raising some eyebrows. According to experts, arrangements like these allow tech companies to acquire top talent and key technologies in a side-stepping manner, thus avoiding regulatory notification requirements.
The US Federal Trade Commission (FTC) requires mergers and acquisitions totaling more than $126 million be reported in advance. Licensing deals or the mass hiring-away of a company’s employees don’t have this requirement. This allows companies to move more quickly, as they don’t have to undergo the lengthy federal review process.
Microsoft’s deal with Inflection AI is probably one of the highest-profile examples of the “acqui-hiring” trend. In March 2024, the tech giant paid the startup $650 million in licensing fees and hired much of its team, including co-founders Mustafa Suleyman (now CEO of Microsoft AI) and Karén Simonyan (chief scientist of Microsoft AI).
Similarly, last year Amazon hired more than 50% of Adept AI’s key personnel, including its CEO, to focus on AGI. Google also inked a licensing agreement with Character AI and hired a majority of its founders and researchers.
However, regulators have caught on, with the FTC launching inquiries into both the Microsoft-Inflection and Amazon-Adept deals, and the US Justice Department (DOJ) analyzing Google-Character AI.
Reflecting ‘desperation’ in the AI industry
Meta’s decision to go forward with this arrangement anyway, despite that dicey backdrop, seems to indicate how anxious the company is to keep up in the AI race.
“The most interesting piece of this all is the timing,” said Mayham. “It reflects broader industry desperation. Tech giants are increasingly buying parts of promising AI startups to secure key talent without acquiring full companies, following similar patterns with Microsoft-Inflection and Google-Character AI.”
However, the regulatory risks are “real but nuanced,” he noted. Meta’s acquisition could face scrutiny from antitrust regulators, particularly as the company is involved in an ongoing FTC lawsuit over its Instagram and WhatsApp acquisitions. While the 49% ownership position appears designed to avoid triggering automatic thresholds, US regulatory bodies like the FTC and DOJ can review minority stake acquisitions under the Clayton Antitrust Act if they seem to threaten competition.
Perhaps more importantly, Meta is not considered a leader in AGI development and is trailing OpenAI, Anthropic, and Google, meaning regulators may not consider the deal all that concerning (yet).
All told, the arrangement certainly signals Meta’s recognition that the AI race has shifted from a compute and model size competition to a data quality and alignment battle, Mayham noted.
“I think the [gist] of this is that Zuck’s biggest bet is that talent and data infrastructure matter more than raw compute power in the AI race,” he said. “The regulatory risk is manageable given Meta’s trailing position, but the acqui-hire premium shows how expensive top AI talent has become.”
Source:: Computer World
By Partner Content What works well for one team becomes chaos when scaled to a department or company level—especially…
The post Can you Scale with Kanban? In-depth Review appeared first on Fossbytes.
Source:: Fossbytes
By Bogdan Gogulan A few months ago, at the SmallSat Symposium, a panel issued a sobering warning to space startups: do not chase defence dollars at the expense of long-term sustainability. Why? Because companies, particularly in the space sector, might be tempted to follow the money rather than focus on producing valuable products and services with broader, longer-term applications. It’s of course true that any company should be wary of “leaning in” too closely to what seem like passing fads. But the warning overlooks an important reality: the shift towards defence investing is not a trend. It’s a transformation in space and space…This story continues at The Next Web
Source:: The Next Web
Apple’s Worldwide Developers Conference 2025 was home to a range of announcements that offered a glimpse into the future of Apple’s software design and artificial intelligence (AI) strategy, highlighted by a new design language called Liquid Glass and Apple Intelligence news.
Liquid Glass is designed to add translucency and dynamic movement to Apple’s user interface across iPhones, iPads, Macs, Apple Watches, and Apple TVs. This overhaul aims to make interactions with elements like buttons and sidebars adapt contextually.
However, the real news of WWDC may be what we didn’t see. Analysts had high expectations for Apple’s AI strategy. While Apple Intelligence was announced, many market watchers reported that it lacked the innovation of Google’s and Microsoft’s generative AI rollouts.
The question of whether Apple is playing catch-up lingered at WWDC 2025. Comments from Apple about delaying a significant AI overhaul for Siri were reportedly interpreted as a setback by investors, leading to a negative reaction and drop in stock price.
Follow this page for Computerworld’s coverage of WWDC25.
WWDC25 news and analysis
For developers, Apple’s tools get a lot better for AI
June 12, 2025: Apple announced one important AI update at WWDC this week, the introduction of support for third-party large language models (LLM), such as ChatGPT from within Xcode. It’s a big step that should benefit developers, accelerating app development.
WWDC 25: What’s new for Apple and the enterprise?
June 11, 2025: Beyond its new Liquid Glass UI and other major improvements across its operating systems, Apple introduced a host of changes, tweaks, and enhancements for IT admins at WWDC 2025.
What we know so far about Apple’s Liquid Glass UI
June 10, 2025: What Apple has tried to achieve with Liquid Glass is to bring together the optical quality of glass and the fluidity of liquid to emphasize transparency and lighting when using your devices.
WWDC first look: How Apple is improving its ecosystem
June 9, 2025: While the new user interface design Apple execs highlighted at this year’s Worldwide Developers Conference (WWDC) might have been a bit of an eye-candy distraction, Apple’s enterprise users were not forgotten.
Apple infuses AI into the Vision Pro
June 8, 2025: Sluggish sales of Apple’s Vision Pro mixed reality headset haven’t dampened the company’s enthusiasm for advancing the device’s 3D computing experience, which now incorporates AI to deliver richer context and experiences.
WWDC: Apple is about to unlock international business
June 4, 2025: One of the more exciting pre-WWDC rumors is that Apple is preparing to make language problems go away by implementing focused artificial intelligence in Messages, which will apparently be able to translate incoming and outgoing messages on the fly.
Source:: Computer World