By The Conversation The potential of using artificial intelligence in drug discovery and development has sparked both excitement and skepticism among scientists, investors and the general public. “Artificial intelligence is taking over drug development,” claim some companies and researchers; AI in drug discovery is “nonsense,” warn some industry veterans. Over the past few years, interest in using AI to design drugs and optimise clinical trials has driven a surge in research and investment. AI-driven platforms like AlphaFold, which won the 2024 Nobel Prize for its ability to predict the structure of proteins and design new ones, showcase AI’s potential to accelerate drug development. …This story continues at The Next Web
Source:: The Next Web
By Hisan Kidwai Google Drive’s share feature is pretty handy, allowing us to share important documents, files, and videos….
The post How to Create Direct Download Links for Google Drive? appeared first on Fossbytes.
Source:: Fossbytes
2025 began in turmoil, with layoffs at some of the largest tech companies despite the support shown by the new US administration. 2024 had been a year of recovery, with the pace of layoffs slowing and IT employment at its highest in years, following two years of massive IT layoffs in 2022 and 2023.
According to data compiled by Layoffs.fyi, the online tracker keeping tabs on job losses in the technology sector, 1,193 tech companies laid off 264,220 staff in 2023, dropping to “just” 152,104 employees laid off by 547 companies in 2024. In 2025, it has already logged 7,003 staff laid off by 31 companies.
Here is a list — to be updated regularly — of some of the most prominent technology layoffs the industry has experienced recently.
Tech layoffs in 2025
Salesforce
Meta
Feb. 4, 2025: Salesforce lays off over 1,000
At the same time as it’s hiring sales staff for its new artificial intelligence products, Salesforce is laying off over 1,000 workers across the company, according to Bloomberg. As of June 2024, the company had over 72,000 employees, according to its website. Salesforce did not comment on the report. In 2024, the company reportedly laid off around 1,000 staff as well, in two waves, in January and July.
Jan. 14, 2025: Meta will lay off 5% of workforce
Mark Zuckerberg told Meta employees he intended to “move out the low performers faster” in an internal memo reported by Bloomberg. The memo announced that the company will lay off 5% of its staff, or around 3,600 people, beginning Feb. 10. The company had already reduced its headcount by 5% in 2024 through natural attrition, the memo said. Among those leaving the company will be staff previously responsible for fact-checking posts on its social media platforms in the US, as the company begins relying on its users to police content.
Tech layoffs in 2024
Equinix
AMD
Freshworks
Cisco
General Motors
Intel
OpenText
Microsoft
AWS
Dell
Cisco
Nov. 26, 2024: Equinix to cut 3% of staff
Despite intense demand for its data center capacity, Equinix is planning to lay off 3% of its workforce, or around 400 employees. The announcement followed the appointment of Adaire Fox-Martin to replace Charles Meyers as CEO and the departures of two other senior executives, CIO Milind Wagle and CISO Michael Montoya.
Nov. 13, 2024: AMD to cut 4% of workforce
AMD will lay off around 1,000 employees as it pivots towards developing AI-focused chips, it said. The move came as a surprise to staff, as the company also reported strong quarterly earnings.
Nov. 7, 2024: Freshworks lays off 660
Enterprise software vendor Freshworks laid off around 660 staff, or around 13% of its headcount, despite reporting increased revenue and profits in its fourth fiscal quarter. The company described the layoffs as a realignment of its global workforce.
Sept. 17, 2024: Cisco lays off 6,000
After laying off around 4,200 staff in February, Cisco is at it again, laying off another 6,000, or around 7% of its workforce. Among the divisions affected was its threat intelligence unit, Talos Security.
Aug. 20, 2024: General Motors lays off 1,000 software staff
More than 1,000 software and services staff are on the way out at General Motors, signalling that it could be rethinking its digital transformation strategy. In an internal memo, the company said that it was moving resources to its highest-priority work and flattening hierarchies.
August 1, 2024: Intel removes 15,000 roles
Intel plans to cut its workforce by around 15% to reduce costs after a disastrous second quarter. Revenue for the three months to June 29 stagnated at around $12.8 billion, but net income fell 85% to $83 million, prompting CEO Pat Gelsinger to bring forward a company-wide meeting in order to announce that 15,000 staff would lose their jobs. “This is an incredibly hard day for Intel as we are making some of the most consequential changes in our company’s history,” Gelsinger wrote in an email to staff, continuing: “Our revenues have not grown as expected — and we’ve yet to fully benefit from powerful trends, like AI. Our costs are too high, our margins are too low. We need bolder actions to address both — particularly given our financial results and outlook for the second half of 2024, which is tougher than previously expected.”
July 4, 2024: OpenText to lay off 1,200
OpenText said it will lay off 1,200 staff, or about 1.7% of its workforce, in a bid to save around $100 million annually. It plans to hire new sales and engineering staff in other areas in 2025, it said.
June 4, 2024: Microsoft lays off staff in Azure division
Microsoft laid off staff in several teams supporting its cloud services, including Azure for Operations and Mission Engineering. The company didn’t say exactly how many staff were leaving.
April 4, 2024: Amazon downsizes AWS in a fresh cost-cutting round
Amazon announced hundreds of layoffs in the sales and marketing teams of its AWS cloud services division — and also in the technology development teams for its physical retail stores, as it stepped back from efforts to generalize the “Just Walk Out” technology built for its Amazon Fresh grocery stores.
April 1, 2024: Dell acknowledges 13,000 job cuts
Dell Technologies’ latest 10-K filing with the US Securities and Exchange Commission disclosed that the company had laid off 13,000 employees over the course of the 2023 fiscal year; it characterized the layoffs and other reorganizational moves as cost-cutting measures. “These actions resulted in a reduction in our overall headcount,” the company said. A comparison to the previous year’s 10-K filing, performed by The Register, found that Dell employed 133,000 people at that point, compared to 120,000 as of February 2024. Dell announced layoffs of 6,650 staffers on Feb. 6, but it is unclear whether those cuts were reflected in the numbers from this year’s 10-K statement.
Feb. 14, 2024: Cisco cuts 5% of workforce
Cisco will shed 4,200 of its 84,900 employees as it refocuses on more profitable areas of its business, including AI and security. The company’s last major round of layoffs was in November 2022. Cisco’s sales of telecommunications equipment have been hit by delays at telcos in rolling out equipment they have already purchased. AI, on the other hand, is a growing business for Cisco, with AI-related sales in the billions, and that’s before it announced its recent partnership with Nvidia, which is making bank on sales of chips for AI applications.
See news of earlier layoffs.
Source:: Computer World
According to a new policy document from Meta, the Frontier AI Framework, the company might not release AI systems developed in-house in certain risky scenarios.
The document defines two risk tiers for AI systems: “high risk” and “critical risk.” In both cases, these are systems that could help carry out cyber, chemical or biological attacks.
Systems classified as “high risk” might facilitate such an attack, though not to the same extent as a “critical risk” system, which could result in catastrophic outcomes. These could include, for example, taking over a corporate environment or deploying powerful biological weapons.
In the document, Meta states that if a system is “high risk,” the company will restrict internal access to it and will not release it until measures have been taken to reduce the risk to “moderate levels.” If, instead, the system is “critical risk,” security protections will be put in place to prevent it from spreading and development will stop until the system can be made safer.
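The framework’s release policy amounts to a simple decision rule per risk tier. The sketch below is purely illustrative: the tier names come from the document, but the function and its return strings are hypothetical, not Meta’s actual tooling.

```python
# Illustrative sketch of the Frontier AI Framework's release policy.
# Tier names follow the document; everything else here is hypothetical.
def release_policy(risk_tier: str) -> str:
    if risk_tier == "critical":
        # Development stops and protections keep the system from spreading.
        return "halt development and contain the system"
    if risk_tier == "high":
        # Internal access is restricted; no release until risk is moderate.
        return "restrict internal access until risk drops to moderate"
    # Systems below "high risk" follow the normal release process.
    return "normal release process"

print(release_policy("high"))
```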
Source:: Computer World
By Hisan Kidwai Apple’s iOS has several safeguards in place to prevent apps from accessing sensitive functions, such as…
The post How to Allow Camera Access to Instagram on iPhone appeared first on Fossbytes.
Source:: Fossbytes
By Thomas Macaulay The NBA is experimenting with a digital brain for basketballs. The system is the brainchild of SportIQ, a Finnish startup that develops smart basketballs. Inside each ball’s valve, SportIQ embeds a sensor that tracks a player’s shots. The sensor first extracts data on the player’s form, position, angle, power, and technique. Next, the information is fed to a mobile app for AI analysis. Players then receive direct feedback and advice. According to SportIQ, over 20 million shots have already been tracked. The company estimates that regular users improve their shooting accuracy by 12%. The results impressed bigwigs at the NBA. They revealed…This story continues at The Next Web
Source:: The Next Web
By Hisan Kidwai Cookie Run Kingdom is a popular role-playing game (RPG) and city simulator in which players manage…
The post Latest Cookie Run Kingdom Codes (February 2025) appeared first on Fossbytes.
Source:: Fossbytes
AMD on Monday issued two patches for severe microcode security flaws, defects that AMD said “could lead to the loss of Secure Encrypted Virtualization (SEV) protection.” The bugs were inadvertently revealed by a partner last week.
The most dangerous time for this kind of security hole is right after it is disclosed and before patches are applied. Due to the nature of microcode patches, enterprise users now have to wait for OEMs and other partners to implement the fixes in their hardware-specific microcode.
“This places more burden on hardware vendor OEMs to distribute and install. That may cause a delay in adoption,” said John Price, CEO at Cleveland-based security firm SubRosa. “This could potentially create a gap. The speed of adoption might not be as quick as we would like to see.”
Source:: Computer World
DeepSeek, founded in 2023 by Liang Wenfeng, a Chinese entrepreneur, engineer and former hedge fund manager, is generating a lot of buzz — and for good reason. Here are five things that make it stand out (as well as a listing of the latest news and analysis about DeepSeek).
DeepSeek offers:
More accessibility and efficiency: DeepSeek is designed to be less expensive to train and use than many competing large language models (LLMs). Its architecture allows for high performance with fewer computational resources, which can lead to faster response times and lower energy consumption.
Open-source availability and rapid development: DeepSeek is under active development with new models and features being released regularly. Models are often available for public download (on Hugging Face, for instance), which encourages collaboration and customization.
Advanced capabilities (reasoning and multimodal learning): Models like DeepSeek-R1 are designed with a focus on advanced reasoning capabilities, aiming to go beyond simple text generation. DeepSeek is expanding into multimodal learning, handling diverse input types such as images, audio and text for a more comprehensive understanding.
Limitations (bias and context): Like all LLMs, DeepSeek is susceptible to biases in its training data. Some biases may be intentional for content moderation purposes, which raises important ethical questions. While efficient, DeepSeek could have limitations in handling extremely long texts or complex conversations.
Architecture and performance: DeepSeek uses a “mixture of experts” architecture, employing specialized submodels for different tasks, enhancing efficiency and potentially reducing training data needs. DeepSeek has demonstrated competitive performance, comparable to established models in certain tasks, especially mathematics and coding.
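To illustrate the “mixture of experts” idea in miniature, here is a toy sketch: random linear layers stand in for the experts, and a linear router picks the top-k of them per input. This is not DeepSeek’s actual code; it only shows why routing each input to a few experts, rather than running all of them, saves compute.

```python
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, D_IN, D_OUT, TOP_K = 4, 8, 8, 2

# Each "expert" is a small linear layer; the router is another linear layer.
experts = [rng.standard_normal((D_IN, D_OUT)) for _ in range(N_EXPERTS)]
router = rng.standard_normal((D_IN, N_EXPERTS))

def moe_forward(x):
    """Route input x to the top-k experts and mix their outputs."""
    logits = x @ router
    top = np.argsort(logits)[-TOP_K:]            # indices of the k best experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                     # softmax over selected experts
    # Only the selected experts run -- the rest stay idle, saving compute.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.standard_normal(D_IN)
y = moe_forward(x)
print(y.shape)  # (8,)
```

Real MoE models work the same way at vastly larger scale: only the routed experts’ parameters are activated for a given token, which is one reason training and inference can be cheaper than in a dense model of the same total size.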
Follow this page for the latest news and analysis on DeepSeek.
The DeepSeek lesson: success without relying on Nvidia GPUs
Feb. 3, 2025: During the past two weeks, DeepSeek unraveled Silicon Valley’s comfortable narrative about generative AI (genAI) by introducing dramatically more efficient ways to scale large language models (LLMs). Without billions in venture capital to spend on Nvidia GPUs, DeepSeek had to be more resourceful and learned how to “activate only the most relevant portions of their model.”
Nvidia unveils preview of DeepSeek-R1 NIM microservice
Jan. 31, 2025: Nvidia stock plummeted after Chinese AI developer DeepSeek unveiled its DeepSeek-R1 LLM. Last week, the chipmaker turned around and announced that the DeepSeek-R1 model is available as a preview NIM on build.nvidia.com. Nvidia’s inference microservice is a set of containers and tools to help developers deploy and manage genAI models across clouds, data centers, and workstations.
Italy blocks DeepSeek due to unclear data protection
Jan. 31, 2025: Italy’s data protection authority Garante has decided to block Chinese AI model DeepSeek in the country. The decision comes after the Chinese companies providing the chatbot service failed to provide the authority with sufficient information about how users’ personal data is used.
How DeepSeek changes the genAI equation for CIOs
Jan. 30, 2025: The new genAI model’s explosion on the scene is likely to amp up competition in the market, drive innovation, reduce costs and make genAI initiatives more affordable. It’s also a metaphor for increasing disruption. Maybe it’s time for CIOs to reassess their AI strategies.
DeepSeek leaks 1 million sensitive records in a major data breach
Jan. 30, 2025: A New York-based cybersecurity firm, Wiz, has uncovered a critical security lapse at DeepSeek, a rising Chinese AI startup, revealing a cache of sensitive data openly accessible on the internet. According to Wiz, the exposed data included over a million lines of log entries, digital software keys, backend details, and user chat history from DeepSeek’s AI assistant.
Microsoft first raises doubts about DeepSeek and then adds it to its cloud
Jan. 30, 2025: Despite initiating a probe into the Chinese AI startup, Microsoft added DeepSeek’s latest reasoning model R1 to its model catalog on Azure AI Foundry and GitHub.
How DeepSeek will upend the AI industry — and open it to competition
Jan. 30, 2025: DeepSeek is more than China’s ChatGPT. It’s a major step forward for global AI by making model building cheaper, faster, and more accessible, according to Forrester Research. While LLMs aren’t the only route to advanced AI, DeepSeek should be “celebrated as a milestone for AI progress,” the research firm said.
DeepSeek triggers shock waves for AI giants, but the disruption won’t last
Jan. 28, 2025: DeepSeek’s open-source AI model’s impact lies in matching US models’ performance at a fraction of the cost by using compute and memory resources more efficiently. But industry analysts believe investor reaction to DeepSeek’s impact on US tech firms is being exaggerated.
DeepSeek hit by cyberattack and outage amid breakthrough success
Jan. 28, 2025: Chinese AI startup DeepSeek was hit by a cyberattack, according to the company, prompting it to restrict user registrations and manage website outages as demand for its AI assistant soared. According to the company’s status page, DeepSeek has been investigating the issue since late evening Beijing time on Monday.
What enterprises need to know about DeepSeek’s game-changing R1 AI model
Jan. 27, 2025: Two years ago, OpenAI’s ChatGPT launched a new wave of AI disruption that left the tech industry reassessing its future. Now, within the space of a week, a small Chinese startup called DeepSeek has pulled off a similar coup, this time at OpenAI’s expense.
iPhone users turn on to DeepSeek AI
Jan. 27, 2025: As if from nowhere, OpenAI competitor DeepSeek has risen to the top of the iPhone App Store chart, overtaking OpenAI’s ChatGPT. It’s the latest in a growing line of genAI services and seems to offer some significant advantages, not least its relatively lower development and production costs.
Chinese AI startup DeepSeek unveils open-source model to rival OpenAI o1
Jan. 23, 2025: Chinese AI developer DeepSeek has unveiled an open-source version of its reasoning model, DeepSeek-R1, featuring 671 billion parameters and claiming performance superior to OpenAI’s o1 on key benchmarks. “DeepSeek-R1 achieves a score of 79.8% Pass@1 on AIME 2024, slightly surpassing OpenAI-o1-1217,” the company said in a technical paper. “On MATH-500, it attains an impressive score of 97.3%, performing on par with OpenAI-o1-1217 and significantly outperforming other models.”
Source:: Computer World
By The Conversation Wearable devices have become a big part of modern health care, helping track a patient’s heart rate, stress levels and brain activity. These devices rely on electrodes, sensors that touch the skin to pick up electrical signals from the body. Creating these electrodes isn’t as easy as it might seem. Human skin is complex. Its properties, such as how well it conducts electricity, can change depending on how hydrated it is, how old you are or even the weather. These changes can make it hard to test how well a wearable device works. Additionally, testing electrodes often involves human volunteers,…This story continues at The Next Web
Source:: The Next Web
By Hisan Kidwai Borderlands 3 is a popular first-person shooter and action role-playing game. It revolves around the story…
The post Borderlands 3 SHiFT Codes: February 2025 appeared first on Fossbytes.
Source:: Fossbytes
By Hisan Kidwai Flying bikes have been a big part of pop culture for decades. Take the speeder bike…
The post Maviator SkyRacer X1: The Future of Urban Mobility is Here! appeared first on Fossbytes.
Source:: Fossbytes
By Thomas Macaulay As China’s DeepSeek threatens to dismantle Silicon Valley’s AI monopoly, a European alliance has emerged with an alternative to tech’s global order. They call their project OpenEuroLLM. Like DeepSeek, they aim to develop next-generation open-source language models — but their agenda is very different. Their mission: forging European AI that will foster digital leaders and impactful public services across the continent. To support these objectives, OpenEuroLLM is building a family of high-performing, multilingual large language foundation models. The models will be available for commercial, industrial, and public services. Over 20 leading European research institutions, companies, and high-performance computing (HPC) centres…This story continues at The Next Web
Source:: The Next Web
By Deepti Pathak If you enjoy playing Anime Showdown, you probably know how helpful free rewards can be. They…
The post Anime Showdown Codes: February 2025 appeared first on Fossbytes.
Source:: Fossbytes
OpenAI on Friday released the latest model in its reasoning series, o3-mini, both in ChatGPT and its application programming interface (API). It had been in preview since December 2024.
The company said in its announcement that it “advances the boundaries of what small models can achieve, delivering exceptional STEM capabilities — with particular strength in science, math, and coding — all while maintaining the low cost and reduced latency of OpenAI o1-mini.”
OpenAI said that o3-mini delivered math and factuality responses 24% faster than o1-mini, with medium reasoning effort, and testers preferred its answers to those generated by o1-mini more than half the time.
In addition, the announcement said, “while OpenAI o1 remains our broader general knowledge reasoning model, OpenAI o3-mini provides a specialized alternative for technical domains requiring precision and speed. In ChatGPT, o3-mini uses medium reasoning effort to provide a balanced trade-off between speed and accuracy. All paid users will also have the option of selecting o3-mini-high in the model picker for a higher-intelligence version that takes a little longer to generate responses. Pro users will have unlimited access to both o3-mini and o3-mini-high.”
The model is now available to users of ChatGPT Plus, Team, and Pro; Enterprise and Education users must wait another week. It will replace o1-mini in the model picker, providing higher rate limits and lower latency. OpenAI is tripling the rate limit for Team and Plus users from 50 messages per day (with o1-mini) to 150 messages per day with o3-mini. The company did not state usage limits for free plan users.
In addition, an early prototype of integration with search will find answers online, with links to their sources.
The model also offers new features for developers who incorporate OpenAI models in their software, including function calling, developer messages, and structured outputs. They can also choose one of three reasoning effort options — low, medium, and high — to adjust power and latency to suit the use case. However, unlike OpenAI o1, it does not support vision capabilities. The company said that o3-mini is available in the Chat Completions API, Assistants API, and Batch API now, to select developers in API usage tiers 3-5.
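As a sketch of how the reasoning-effort option might appear in an API call, the request body below follows the Chat Completions format with a per-request `reasoning_effort` value. The helper function is hypothetical, and the exact parameter and model names should be verified against OpenAI’s current API reference before use.

```python
import json

# Hypothetical helper: builds a Chat Completions request body for o3-mini.
# "reasoning_effort" is the low/medium/high knob described above; check the
# parameter and model names against OpenAI's API reference.
def build_o3_mini_request(prompt: str, effort: str = "medium") -> dict:
    assert effort in ("low", "medium", "high")
    return {
        "model": "o3-mini",
        "reasoning_effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_o3_mini_request("Summarize the divergence theorem.", effort="high")
print(json.dumps(body, indent=2))
```

Choosing “low” trades some accuracy for speed and cost, while “high” spends more tokens on reasoning before answering, which is the trade-off the announcement describes.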
In addition to its performance, OpenAI touted the model’s safety. “Similar to OpenAI o1, we find that o3-mini significantly surpasses GPT-4o on challenging safety and jailbreak evaluations. Before deployment, we carefully assessed the safety risks of o3-mini using the same approach to preparedness, external red-teaming, and safety evaluations as o1.”
Source:: Computer World
By Thomas Macaulay A German startup plans to jumpstart European EVs with an AI-powered brain. Sphere Energy built the system to simulate battery behaviour. The company then predicts a power source’s lifetime in numerous scenarios, from driving styles to temperatures on the road. According to Sphere, the insights shrink the battery testing cycle by at least a year. Developing a car, meanwhile, could be completed “at least” twice as quickly. Sphere envisions endless benefits: manufacturers will save millions, car prices will plummet, and innovations will increase at exponential rates. The startup’s co-founder, Lukas Lutz, said the plans are unprecedented. “Nobody right now —…This story continues at The Next Web
Source:: The Next Web
By The Conversation The rapid spread of artificial intelligence has people wondering: who’s most likely to embrace AI in their daily lives? Many assume it’s the tech-savvy — those who understand how AI works — who are most eager to adopt it. Surprisingly, our new research (published in the Journal of Marketing) finds the opposite. People with less knowledge about AI are actually more open to using the technology. We call this difference in adoption propensity the “lower literacy-higher receptivity” link. This link shows up across different groups, settings, and even countries. For instance, our analysis of data from market research company Ipsos…This story continues at The Next Web
Source:: The Next Web
The DeepSeek-R1 model has managed to attract a lot of attention in a short time, especially because it can be used commercially without restrictions.
Now, developers at Hugging Face are trying to reconstruct the generative AI (genAI) model from scratch and develop an alternative to DeepSeek-R1, called Open-R1, based on open source code. Although DeepSeek is often referred to as an open model, parts of it are not completely open.
“Ensuring that the entire architecture behind R1 is open source is not just about transparency, but about unlocking its full potential,” developer Elie Bakouch, of Hugging Face, told Techcrunch.
In the long run, Open-R1 could make it easier to create genAI models without sharing data with other actors.
Source:: Computer World
By The Conversation In 1981, American physicist and Nobel Laureate, Richard Feynman, gave a lecture at the Massachusetts Institute of Technology (MIT) near Boston, in which he outlined a revolutionary idea. Feynman suggested that the strange physics of quantum mechanics could be used to perform calculations. The field of quantum computing was born. In the 40-plus years since, it has become an intensive area of research in computer science. Despite years of frantic development, physicists have not yet built practical quantum computers that are well suited for everyday use and normal conditions (for example, many quantum computers operate at very low temperatures). Questions…This story continues at The Next Web
Source:: The Next Web
Earlier this week, we learned about Apple’s decision to appoint Kim Vorrath, the vice president of the company’s Technology Development Group (TDG), to help build Apple Intelligence under the supervision of John Giannandrea, Apple’s senior vice president for machine learning and AI.
Vorrath, who also serves as a board member at the National Center for Women in IT and sits on the Industrial Advisory Board at Cal Poly, has been with Apple since 1987. She’s taken leadership roles in iOS and OS X — she was even in charge of macOS at one time. Part of the original iPhone development team, she also supervised OS development for iPad, Mac and Vision Pro.
When it comes to bug testing and software quality control, she can say which features are ready to go and which are not. Vorrath also coordinates releases, not just for the specific platform (such as iPhone), but between devices, which means a great deal when you consider how integrated the Apple ecosystem has become.
Getting the band together
That established talent will be critical, given that Apple Intelligence features are also designed to work across the Apple ecosystem.
Of course, making these complex high-tech products work well together takes effective organization. Vorrath brings that. She seems to be a person who can organize engineering groups and design effective workflows to optimize what those teams can do. With all these achievements, it is no surprise Vorrath is seen as one of the women who contributed the most to making Apple great.
In her new role, she joins Giannandrea, who allegedly “needs additional help managing an AI group with growing prominence,” Bloomberg reported.
Put it all together and it’s clear that Vorrath is one of Apple’s top fixers and joins the AI team at a critical point. First, she’s probably going to help get a new contextually-aware Siri out the door, and second, she’ll be making decisions around what happens in the next major iterations of Apple Intelligence.
It’s the next steps for Apple’s AI that I think have been missed in much of the coverage of this internal Apple shuffle.
Apple Intelligence 2.0
While people like to focus on Siri’s improvements and shortcomings, it must also be true that Apple hopes to maintain its traditional development cadence when it comes to Apple Intelligence.
That means delivering additional features and feature improvements every year, usually at WWDC. With the next WWDC looming fast, it might fall to Vorrath to select what additions are made, and to ensure they get developed on time.
Think logically and you can see why that matters. Apple announced Apple Intelligence at WWDC 2024, but it wasn’t ready to ship alongside the original release of operating system updates, and features were slowly introduced in the following months.
Arguably, the schedule didn’t matter. What does matter is that Apple, then seen as falling behind in AI, used Apple intelligence to argue for its own continued corporate relevance. It bought itself some time.
Now it must follow up on that time. That means making improvements and additions to show continued momentum. It comes down to delivering solutions consumers will want to use, with a little Apple magic alongside new developer tools to extend that ecosystem.
It has to succeed in doing this to maintain credibility in AI.
Is Apple going to stay relevant?
Getting that right, particularly across all Apple’s platforms and in good time, is challenging, and is most likely why Vorrath has been brought in. There’s so much riding on getting the mix right. Apple needs to be able to say “Hey, we’re not done yet with Apple Intelligence,” and back that claim up with tools to keep users’ interest. Those new AI services need to work well, ship on time, and work so well that people won’t even know how much they needed them until they use them.
Getting that mix right is going to take skill, dedication, and discipline. In the coming months, all eyes will be on Apple as critics and competitors wait to find out whether Apple Intelligence was a one-shot attempt at maintaining relevance, or the first steps of a great company about to find its AI feet.
Making sure it is the second, and not the first, should be the fundamental mission Vorrath has taken on in her new role.
You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.
Source:: Computer World