The future of wearables may reside on your finger as opposed to your wrist. Here are our favorite smart rings.
Source:: Digital Trends
Downloading a transcript from a YouTube video can be helpful for many reasons. It makes reviewing…
The post How to Download the Transcript of a YouTube Video? appeared first on Fossbytes.
Source:: Fossbytes
By Hisan Kidwai
We all agree that downloading a new game from Steam is the most exciting thing. But…
The post 5 Best Ways to Make Steam Downloads Faster: 2024 Guide appeared first on Fossbytes.
Source:: Fossbytes
A warming world will have, and is already having, a profound impact on the things we all depend on: shelter, food, water, energy, medicine. Most nations have committed to drastic cuts in greenhouse gas emissions to dial back the planet’s thermostat. But true sustainability is not just about emissions. We will need to transform the way all industries operate — from agriculture to transport and health — to meet the SDGs. The great green transition necessitates innovation. It calls for new, clean technologies and the scaling of proven ones. It requires industry leaders, disruptive innovators, and ambitious startups to…
This story continues at The Next Web
Source:: The Next Web
The Cybersecurity Association of China (CSAC) has urged a security review of Intel products sold in the country, claiming the US semiconductor firm poses ongoing threats to China’s national security and interests.
In a statement posted on its WeChat account, CSAC said that Intel’s major product quality and security management flaws indicate its extremely irresponsible attitude toward customers.
CSAC is an industry body, but its allegations raise concerns about a potential security review and subsequent action by the country’s cyberspace regulator, the Cyberspace Administration of China (CAC). Last year, CAC banned products from Micron, citing national security risk.
Significantly, this year Intel has secured orders for its Xeon processors from several Chinese state-affiliated agencies for AI applications, according to Reuters.
This action marks the latest chapter in the ongoing trade conflict, which has seen US administrations ban Chinese-made hardware from domestic networks and impose export controls to limit China’s access to advanced computing technologies.
A ban could deal a significant blow to Intel, already struggling with financial challenges, a shrinking market share, and layoffs. It could also affect Chinese companies that are already contending with US export restrictions.
“If the CAC decides to take more drastic action than it did with Micron, Intel could face significant challenges with its sales and market share in China,” said Thomas George, president of Cybermedia Research. “This situation also poses a risk for the numerous companies in China that rely on Intel chips for high-performance computing (HPC), which is essential for scientific research, financial services, and even national security.”
A potential review and subsequent action from the CAC could substantially impact Intel’s strategies and market position, while also reshaping the strategic considerations of other key players in the semiconductor industry, George noted.
Other analysts also highlight that the broader impact would likely extend beyond Intel, affecting the industry as a whole.
“The sanctions will definitely have repercussions and a short-term impact on Intel,” said Pareekh Jain, CEO of Pareekh Consulting. “But although rivals like AMD might see some initial benefit, eventually they will likely be targeted as well. The medium-term goal seems to be to bolster China’s domestic chip industry.”
China has been pushing for self-sufficiency in the semiconductor sector, recently urging domestic car manufacturers such as SAIC Motor, BYD, Dongfeng Motor, GAC Motor, and FAW Group to boost their sourcing of automotive-related chips from local suppliers.
“Chinese firms like Huawei and Alibaba are accelerating their investments in semiconductor technologies,” George said. “Despite these ambitions, the readiness and capability of domestic alternatives to match Intel’s offerings remain uncertain.”
One potential way for US companies to overcome this challenge would be to invest more in China, Jain suggested, citing Tesla as an example.
“This strategy would demonstrate a commitment to the Chinese market and may provide some protection from retaliatory actions,” Jain added. “Essentially, US companies can position themselves as commercial entities with interests in China, separate from US government actions.”
Source:: Computer World
By Hisan Kidwai
While the term “Windows for ARM” has been around for over a decade, it never gained…
The post ASUS Vivobook S15 OLED Review: The Best Windows for ARM Laptop? appeared first on Fossbytes.
Source:: Fossbytes
Corporate law is nothing like you see on television. To prepare for a case, 150 attorneys might be tasked to travel to remote warehouses to comb through tens of millions of documents gathering dust or track down amorphous electronic communications. It’s a process known as discovery.
For more than a decade, law firms have been using machine learning and artificial intelligence tools to help them hunt down paper trails and digital documents. But it wasn’t until the arrival two years ago of OpenAI’s generative AI (genAI) conversational chatbot, ChatGPT, that the technology became easy enough to use that even first-year associates straight out of law school could rely on it for electronic discovery (eDiscovery).
Today, you’d be hard pressed to find a law firm that hasn’t deployed genAI, or isn’t at the very least kicking the tires on its ability to speed discovery and reduce workloads.
For all intents and purposes, no one practicing law today studied AI in school, which means it falls to firms to integrate the fast-evolving tech into their workplaces and to train young lawyers on matching AI capabilities to client needs while remaining accountable for its output. This is the essence of turning AI into a copilot for all manner of chores, from wading through data to analyzing documents to improving billing.
In that vein, longtime IT workers are no longer just on call for computer glitches and AV setups; they have moved to the forefront of running a law firm, handling AI’s role in winning cases, retaining clients, growing revenue and, inevitably, helping attract the best and brightest new talent. Multinational law firm Cleary Gottlieb is a prime example of that.
Cleary has been able to dramatically cull the number of attorneys used for pre-trial discovery and has even launched a technology unit and genAI legal service: ClearyX. (ClearyX is essentially an arbitrage play — an alternative legal service provider [ALSP] for offshoring eDiscovery and automating electronic workflows.)
While Cleary readily admits that genAI isn’t perfect in retrieving 100% of the documents related to a case or always creating an accurate synopsis of them, neither are humans. At this point in the technology’s development, it’s good enough most of the time to reduce workloads and costs.
Still, cases do pop up where customizing a large language model to suit specific needs can be more expensive than deploying those dozens of eager attorneys seeking to prove themselves.
Computerworld spoke with Christian “CJ” Mahoney, counsel and global head of Cleary’s e-Discovery and Litigation Technology group, and Carla Swansburg, CEO of ClearyX, about how the firm uses genAI tools. The following are excerpts from that interview:
Why is AI being adopted in the legal profession? Mahoney: “Because the legal profession is seeing an explosion of information and data created by their clients, and it’s become increasingly challenging to digest that information strictly through a team of attorneys looking through documents. That explosion probably started two decades ago. It’s been growing more and more challenging.
“I just had a case where we were measuring the amount of data we were looking at, and for one case, we had 15 terabytes we had to analyze. It was over 50 million documents, and we had to do it in a matter of weeks to find out what we had to provide to the opposing party.
“Secondly, we wanted to find out what’s interesting in documents and what supported our advocacy. Traditional ways of looking through that type of information and getting a grasp of the case are really not feasible anymore. You need to incorporate AI into the process for analysis now.”
Swansburg: “One of the big shifts with OpenAI and genAI, in particular, is for the first time there’s ubiquity. Everyone’s hearing about it. Secondly, sophisticated clients are starting to approach it — even the formerly untouched Wall Street firms and other large firms with an eye on cost sensitivity.
“Fast forward to now. There’s a bit of an expectation that with the advent of genAI, things should be quicker and cheaper. Second of all, [there’s] the accessibility of AI through natural language processing. The third thing is the explosion of purpose-designed tools for the legal profession, and that does go back about a decade when you had diligence tools and tools for contract automation.”
How have the expectations of clients changed regarding the use of genAI? Swansburg: “A year-and-a-half ago, we were getting messaging from clients saying, ‘You’d better not be using AI because it seems really risky.’ Now, we’re getting requests from clients asking, ‘How are you using AI to benefit me and how are you using it to make your practices more efficient for me?’
“There’s a lot of changing dynamics. Legal firms that were historically reluctant to embrace this technology are asking for it — ‘When can I get some of this generative AI to use in my practice?’”
How has the job of an attorney changed with genAI? Swansburg: “Nobody went to law school to do this. I used to go through banker’s boxes with sticky notes as a litigator. Nobody wants to do that. Nobody wants to read 100 leases to highlight an assignment clause for you. The good thing is [genAI is] moving up the value chain, but it’s starting with things that people really don’t want to be doing anyways.”
Is genAI replacing certain job titles, filling job roles? Mahoney: “I’d say we’re not to the place where it’s replacing entire categories of jobs. It’s certainly making us more efficient such that if I would have needed a team of 60 attorneys on work I’m doing, I may need a team of about 45 now. That’s the type of efficiency we’re talking about.
“I had over 60 [attorneys] just this weekend working on just one case. It’s the big data explosion of evidence there is to comb through.
“We’re using more complex workflows using AI. I said I saw a 60-person to 45-person reduction. But, on this kind of case, I would have had probably 150 attorneys doing this 15 years ago. Back then, it would just be like ‘OK guys, here’s a mountain of evidence — go through it.’
“Now, we are using several AI strategies to help classify documents for what we need to turn over to help narrow the amount of content we have to look over. It’s helping us to summarize before we even look at the documents, so that we have a summary going in to help us digest the information faster.”
Swansburg: “In my world, it’s not really replacing jobs yet, but it’s changing how you do jobs. So, it’s allowing people to move up the value chain a little bit. It’s taking away rote and repetitive work.
“Our experience has been — and we’ve kicked tires on a lot of language models and purpose-designed tools — [genAI tools] are not good enough to replace people for a lot of the work we do. For something like due diligence…, you often must be right. You need to know whether you can get consent to transfer something. In other use cases, such as summarization and initial drafting, that sort of thing is a little more accessible.”
What does that big data you’re discovering look like? Is it mostly unstructured? Mahoney: “Most of my data sets are unstructured. We’re talking about email and messages on someone’s laptop or a portion of a document repository on a file server. These days, we’re talking about chats on platforms like Teams or mobile devices. Often, we’ll target those collections through good attorney investigations, but a lot of times we have unstructured data sources like mailboxes to comb through. What we’re doing there is using a large language model.
“We are reviewing some samples, some of them random and some of them with training approaches we developed to target documents we think will help the model understand what we’re trying to teach it quicker. We’re reviewing a few thousand documents to train the model to predict if a document is responsive to the other [opposing] side’s document requests. We’re then running that model over millions of documents. We find throughout iterative model training improvement processes, we are approaching and sometimes surpassing the type of performance we’d expect by that team of 150 attorneys looking at all these documents.
“So, we use that as our starting point and sometimes our only process for identifying what we need to deliver to the other side. But once we have that set, we are using similar processes to identify things like attorney-client privilege in the document. And again, to identify which of these documents are interesting and useful for our advocacy.
“Now we’re also coupling that with generative AI workflows where, in addition to this training strategy, we’ve identified small samples of the [document] universe; we’re also seeing prompt-based genAI queries on portions of the data set to find documents that support our advocacy.”
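The train-on-a-sample, score-the-corpus loop Mahoney describes is often called predictive coding or technology-assisted review. A rough sketch of its shape follows; Cleary fine-tunes BERT-class models, but a TF-IDF baseline illustrates the same workflow, and every document and label below is invented for illustration.

```python
# Predictive-coding sketch: attorneys hand-label a small sample, a
# classifier is trained on it, and the model then scores the full
# corpus so reviewers prioritize likely-responsive documents.
# A production system would fine-tune a transformer (e.g., BERT)
# rather than use this TF-IDF baseline; all text here is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hand-labeled training sample (1 = responsive to the document request)
sample_docs = [
    "quarterly pricing agreement with distributor",
    "lunch schedule for the holiday party",
    "email thread on distributor rebate terms",
    "parking garage access instructions",
]
sample_labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(sample_docs)
model = LogisticRegression().fit(X_train, sample_labels)

# Score the (much larger) unreviewed corpus and rank by responsiveness
corpus = [
    "revised distributor rebate terms attached",
    "reminder: badge photos due Friday",
]
scores = model.predict_proba(vectorizer.transform(corpus))[:, 1]
ranked = sorted(zip(scores, corpus), reverse=True)
for score, doc in ranked:
    print(f"{score:.2f}  {doc}")
```

In practice, as Mahoney notes, the model is validated iteratively against attorney review of random samples before it is trusted to cull millions of documents.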
Have you found other uses for AI that you didn’t initially expect? Mahoney: “We’re using genAI to look at files that we could have never used old school keyword searches on because they don’t have any text in them. They could be images or movies. We created a genAI process using some of the really new algorithms out there to analyze things like images and video files for finding more interesting information.
“We’ve also created genAI workflows when we claim attorney-client privilege; we have to create a whole attorney-client privilege log. We’ve created genAI workflows to help us draft the privilege log. It’s the same concept as using genAI to summarize a document. We’re using it to summarize the privileged portion of a document, but summarize it in a way that we’re meeting our privilege log obligation without revealing what the privilege advice is.
“Then a lot of our human-in-the-loop practices involve taking a look at those AI results and doing validation, making some improvements here and there, rather than relying entirely on the AI. The level of that validation depends on what the task is.”
AI has a tendency to go off the rails with errors and hallucinations. How do you address that? Swansburg: “In CJ’s world, they work off of percentages — like 80% accurate. For us, largely we need to be 100% accurate. For a lot of what we do, whether it’s contract analysis and management or transactional diligence, we have a context set of materials. So, the potential for hallucinations is more limited. Having said that, some of the tools on the market will still hallucinate. So, you’ll say, ‘Find me the address of the leased property’ and it’ll totally make something up.
“One of the key things we do, and some of the development work we’re doing, is to say, ‘Show me in the document where that reference is.’ So, there’s a quick and easy way to validate information. You’ve got a reference; you tell me what it says. You’re extracting a piece of it, so we have a really fast way to validate.
“For us, it’s always a discrete set of context documents. So, we can first of all solve through prompting and tailoring it to which set of documents they want us to use, but second of all, always confirming there’s a way to ensure the provenance of the information.
“Some of the work we’re doing is we’ve developed a way to prompt a model to tell us when the termination date of an NDA is. If a person’s reading it, they can usually tell. But NDAs have an effective date and then they have a term that can be written in any number of ways: two years, three years, and then there are often continuing obligations.
“So if you just said, ‘When does this NDA terminate?’ a lot of AI models will get it wrong. But if you generate a way to say, ‘Find me the effective date, find me a clause, find me the period of time or continuing obligations,’ it’s typically 100% accurate. It’s a combination of focused context documents, proper prompt engineering and a validation process.”
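The decomposition Swansburg describes, asking for the individual facts and then combining them deterministically, can be sketched roughly as follows. The `ask_llm` function is a placeholder stubbed with canned answers so the control flow is runnable; the question keys and dates are illustrative, not from any real system.

```python
# Sketch of decomposed prompting: rather than asking one broad
# question ("When does this NDA terminate?"), extract the effective
# date and term separately, then do the date arithmetic in code
# instead of trusting the model to add years correctly.
from datetime import date

def ask_llm(question: str, contract_text: str) -> str:
    """Placeholder for a real LLM call; returns canned answers here."""
    canned = {"effective_date": "2023-06-01", "term_years": "2"}
    return canned[question]

def nda_termination(contract_text: str) -> date:
    # Step 1: extract the effective date as a structured value
    effective = date.fromisoformat(ask_llm("effective_date", contract_text))
    # Step 2: extract the term length in years
    term_years = int(ask_llm("term_years", contract_text))
    # Step 3: compute the termination date deterministically
    return effective.replace(year=effective.year + term_years)

print(nda_termination("sample NDA text"))  # 2025-06-01
```

A real pipeline would also prompt for continuing obligations and validate each extraction against the source clause, as described above.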
Are you using retrieval augmented generation (RAG) to fine-tune these models, and how effective has it been at that task? Mahoney: “We are using RAG to put guardrails on how the large language model responds and what it’s looking at in its response. I think at times that’s certainly a helpful tool to use on top of the LLM.
“I’d also say even though we are more aggressively using LLMs and genAI in the discovery space, the process Carla described looks exactly the same. The difference would be our tolerance for errors, as part of that validation process.
“That’s comparing it to what human results would look like. What we find historically on various tasks in electronic discovery over several decades — humans usually get things right about 75% of the time. So, when we’re looking at LLMs and genAI, we want to be careful it’s working well, but we also want to be careful that we’re not holding it to too high a standard.
“If you’re writing a brief, 75% accuracy would be horrible and unacceptable. But when you’re looking through two million documents, that might be perfectly acceptable. That’s how the process looks a little different, even though the structure of the process looks the same in terms of steps.”
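The guardrail role Mahoney assigns to RAG can be sketched as a two-step pattern: retrieve the most relevant passages first, then constrain the model to answer only from them. Retrieval below is naive keyword overlap purely for illustration; a real system would use vector embeddings, and the documents are invented.

```python
# Minimal RAG sketch: keyword-overlap retrieval plus a prompt that
# restricts the model to the retrieved context, so answers can be
# traced back to a source passage (the "provenance" check above).
def retrieve(query, passages, k=2):
    q = set(query.lower().split())
    scored = sorted(
        passages,
        key=lambda p: len(q & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, passages):
    context = "\n".join(f"- {p}" for p in retrieve(query, passages))
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say so.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

docs = [
    "The lease term ends on 31 December 2026.",
    "Rent is payable monthly in advance.",
    "The company picnic is in July.",
]
prompt = build_prompt("When does the lease term end?", docs)
print(prompt)
```

The "answer only from the context" instruction is what narrows the model's latitude to hallucinate, while the retrieved passages give reviewers a fast way to verify any extracted fact.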
Small language models as opposed to large proprietary models from Amazon, Meta, and OpenAI are growing in popularity because you can create a model for every application need. What kinds of AI models do you use? Mahoney: “We’ve actually been using open large language models for five years now. We started with what was the largest language model at the time, but it’s probably closer to a small language model now. We use a version of BERT a lot when we’re doing supervised learning.
“We are very LLM agnostic, in that we’re able to look at the different tasks and see which one is right for a particular task. For image analysis, or the multimedia analysis, we’re using the latest and greatest, such as the ChatGPT Omni. It’s unique in having capabilities for drafting [client-privilege] log lines. Depending on the data, we’re shifting between GPT-4 or GPT-3.5 Turbo.
“We’re actually looking at where we’re getting reasonable performance and comparing that to things like costs.”
Is price an issue you consider when adopting a model? Mahoney: “Different LLMs have very different price points. For some of our data sets, the way GPT-3.5 Turbo is performing on log lines is actually quite good. So, we wouldn’t want to spend the extra money on GPT-4 there.
“On the small language model front, I’d say we’re doing tuning rather than a separate small language model for each application…. We’re taking an existing model — but where we have an industry that might look very different than what that model was built on — [and] we’re doing some fine tuning on top of that to introduce the model to a dataset before it starts making predictions on it.”
So, essentially, some LLMs are better at some tasks than others? Mahoney: “Some language models are better at certain tasks in summarizing or pinpointing whatever it is. Ideally you have a workflow with six steps and you’re using a different LLM at different steps. You never know who’s going to emerge tomorrow and be better at X or Y.
“We’ve been using OpenAI [LLMs] before it was publicly launched. And we’ve been testing Meta and Claude and using the ones that we think make the most sense for a particular task.”
Data scientists and analysts, prompt engineers — what roles do you have or have you added to address your LLM needs? Swansburg: “For the work CJ does, and the work we do, the larger the data set, the more the need for data scientists. So, he does work with data scientists on his side.
“On my side, in terms of prompt engineers, we have good software developers that can do that for you. We have people who are pure developers, and we have people who sit in the middle that we call ‘legal technologists.’ Those are the translators who take client and lawyer requirements and feed those back and do the customization to the platforms we build.
“We don’t have any data scientists yet, because we use discrete data sets. So it’s more about being able to engineer the prompts — and the team we have now has been able to do that on the developer side. As we grow, and right now we’re recruiting another half-dozen developers, we will get more nuanced and look for people with prompt engineering experience and building APIs with LLMs and other tools.
“So, it’s constantly changing.”
Are you mostly using proprietary rather than open-source models? Mahoney: “Right now, we’re just using proprietary models and plugging them in and testing them — OpenAI being the more common example. We’re building things through prompts like contract determination dates to extract that data we need and building bundles of questions that will be generated based on the automatic determination of what the system is ingesting. All of that is being tested now.
“Some of them are really expensive. Something like ChatGPT is very accessible. Even the enterprise models can do the trick, and they’re accessible and affordable.”
If legal departments and law firms were already using AI and ML, why is ClearyX needed? Swansburg: “We’re trying to build a model that’s a lot less expensive than contract management software…and to have much higher quality than a lot of providers and provide a service.
“A lot of companies don’t have people to own and operate these programs. So, they have shelfware. They buy a contract lifecycle management tool, and it takes three years to get their return on investment; then people don’t use it because it’s not custom designed. So, we’re trying to build custom solutions for clients that work the way they work, and that are affordable.
“We’re not venture capital owned. We’re owned by the partnership, so we’re able to build things in the right way. We’re not just serving clients of the Cleary law firm; we also have a mandate to get outside clients.
“We started thinking we weren’t going to be a development shop. We were going to use existing solutions and weave them together using APIs, but a couple things happened. The tools on the market weren’t doing what we wanted them to do. We weren’t able to customize them in the nuanced way that made clients actually delighted to use them.
“The other is the ubiquity of AI, and the ability to customize them is way easier than it was three years ago. So, over the last eight months or so, we’ve been able to pivot to something that allows us to customize it more easily and collaborate with clients to figure out how they want it to work.”
Source:: Computer World
German “teledriving” startup Vay has secured €34mn from the European Investment Bank (EIB). In January, Vay launched a commercial remote-controlled car service in Las Vegas. Now it wants to roll out the technology on its home turf. In 2023, the company successfully conducted test drives without a safety driver on public roads in Hamburg. Vay says it has been working closely with authorities to launch a commercial service in the German city. “This investment will play a crucial role in strengthening the confidence and trust that EU regulators, partners and consumers have in Vay, paving the way for the commercial…
This story continues at The Next Web
Source:: The Next Web
The potential split-up of Google that’s been proposed by the US Department of Justice (DOJ) could weaken the company, and thus the position of the US in its tech war with China, said former President Donald Trump, who suggested he may not break up the company if he wins the presidency again in November.
In comments made while speaking Tuesday at an event with Bloomberg News during a meeting of the Economic Club of Chicago, Trump said, “China is afraid of Google,” according to a report of the event in the New York Times. He went on to wonder whether splitting Google would “destroy” it, and thus also diminish the US competitively against China. The US and China are at war over tech supremacy, and the US has imposed trade restrictions on the export of technology to the country.
Trump’s comments are somewhat ironic, given that it was his administration that brought an antitrust suit against Google in 2020, weeks before the presidential election. The DOJ argued at the time that Google had illegally maintained a monopoly in the online search business by paying companies like Apple to make it the default search engine on smartphones and in web browsers.
Last week, the notion that Google would be split up became more realistic after the release of a proposal by the DOJ, which said it “is considering behavioral and structural remedies that would prevent Google from using products such as Chrome, Play, and Android to advantage Google search and Google search-related products and features … over rivals or new entrants,” according to a court filing.
The department said that Google’s longstanding control of the Chrome browser, with its preinstalled Google search default, “significantly narrows the available channels of distribution and thus disincentivizes the emergence of new competition.”
The DOJ also said it would target Google’s revenue-sharing agreements with device makers and telecom companies that spurred the case in the first place in its remedies. These deals have kept Google as the default search engine on the vast majority of devices globally, effectively blocking competitors from gaining market share.
Google did not immediately respond to a request for comment Wednesday, either on Trump’s remarks, or on its position on the DOJ proposal.
That split now seems more likely if Vice President Kamala Harris wins the upcoming election, as Democratic administrations traditionally have been on the side of consumer protection and thus splitting up companies with too much power, noted Brad Shimmin, chief analyst, AI and data analytics, at Omdia. Republicans, on the other hand, tend to favor letting large corporations with monopoly market shares remain as they are, he said.
Shimmin and other experts said a split like the one that the DOJ has proposed would offer consumers and enterprises more choice in terms of which technology they use and/or bundle with products. “I think that capitalism thrives upon a bit of chaos and diversity,” he said, adding that breaking up Google would be a win for consumer protection.
“Anytime you have a very solid position with a dominant player, it really quells innovation and quells enrichments, and you end up with a zero-sum game,” he said. That’s because once a company has a dominant position that can’t be challenged, there is little accountability for product and/or service quality, so “companies simply test the bounds of tolerance” with their customers, Shimmin said.
While Trump might favor ensuring Google plays fair instead of breaking up the company, according to his comments reported by the Times, this may not be enough to encourage fair competition, noted another industry expert.
“The fundamental problem with big tech is the economic perversities of monopoly power,” said John Bambenek, president at Bambenek Consulting. “Sure, regulation can help, but if the problem is too extreme, splitting companies up is the only solution to maintain viable capitalism.”
Indeed, capitalism always runs the risk of one company playing the fair market game better than others, which means that regulators sometimes need to step in to rebalance the system. This doesn’t mean the US will lose its edge against global competitors like China, even if that country has more control over its technology development due to its government structure, Bambenek said.
“Communist and autocratic economies, of course, take a different approach,” he said. “However, I still believe we can have both a free market with competition and still be innovative and maintain our tech dominance.”
Source:: Computer World
While we wait for the Age Of Apple Intelligence, it may be worth considering a recent Apple research study that exposes critical weaknesses in existing artificial intelligence models.
Apple’s researchers wanted to figure out the extent to which large language models (LLMs) such as GPT-4o, Llama, Phi, Gemma, or Mistral can actually engage in genuine logical reasoning to reach their conclusions and make their recommendations.
The study shows that, despite the hype, LLMs don’t really perform logical reasoning — they simply reproduce the reasoning steps they learn from their training data. That’s quite an important admission.
“Current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data,” the Apple team said.
They found that while these models may seem to show logical reasoning, even the slightest of changes in the way a query was worded could lead to very different answers. “The fragility of mathematical reasoning in these models [shows] that their performance significantly deteriorates as the number of clauses in a question increases,” they warned.
In an attempt to overcome the limitations of existing tests, Apple’s research team introduced GSM-Symbolic, a benchmarking tool designed to assess how effectively AI systems reason.
The research does show some strength in the models that are available today. For example, ChatGPT-4o still achieved a 94.9% accuracy rate in tests, though that rate dropped significantly when researchers made the problem more complex.
That’s good so far as it goes, but the success rate nearly collapsed — down as much as 65.7% — when researchers modified the challenge by adding “seemingly relevant but ultimately inconsequential statements.”
Those drops in accuracy reflect the limitation inherent within current LLM models, which still basically rely on pattern matching to achieve results, rather than making use of any true logical reasoning. That means these models “convert statements to operations without truly understanding their meaning,” the researchers said.
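The GSM-Symbolic approach described above can be sketched in miniature: take a word-problem template, vary the surface details (names, numbers), optionally inject an irrelevant clause, and check whether answers stay consistent even though the correct answer hasn't changed. The template, names, and distractor below are invented for illustration, not taken from the benchmark itself.

```python
# Tiny sketch of symbolic benchmark variation: the same problem is
# regenerated with different surface details and an optional
# "seemingly relevant but ultimately inconsequential" clause.
# A pattern-matching model may answer the variants inconsistently,
# even though the ground truth is identical.
import random

TEMPLATE = ("{name} picks {a} apples on Monday and {b} on Tuesday. "
            "How many apples does {name} have?")
IRRELEVANT = " Five of the apples are slightly smaller than the rest."

def make_variant(seed, add_distractor=False):
    rng = random.Random(seed)
    name = rng.choice(["Sofia", "Liam", "Wei"])
    a, b = rng.randint(2, 20), rng.randint(2, 20)
    question = TEMPLATE.format(name=name, a=a, b=b)
    if add_distractor:
        question += IRRELEVANT
    return question, a + b  # ground truth is unaffected by the clause

q1, ans1 = make_variant(seed=1)
q2, ans2 = make_variant(seed=1, add_distractor=True)
assert ans1 == ans2  # the distractor never changes the correct answer
print(q2)
```

Scoring a model across many such seeds, with and without distractors, is what exposes the accuracy drops the researchers report.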
Commenting on Apple’s research, Gary Marcus, a scientist, author, AI critic, and professor of psychology and neural science at NYU, wrote: “There is just no way you can build reliable agents on this foundation, where changing a word or two in irrelevant ways or adding a few bits of irrelevant info can give you a different answer.”
Professor Marcus also pointed to some other tasty hints that Apple’s findings are correct, including an Arizona State University analysis that shows LLM performance declines as problems become greater and the inability of chatbots to play chess without making illegal moves.
All the same, the high accuracy displayed when using these machines for more conventionally framed problems suggests that, while fragile, AI will be of use as an adjunct to human decision-making.
At the very least, the data suggests that it is unwise to place total trust in the technology, as there is a tendency to failure when the underlying logic the models derive during training is stretched. It seems that AI doesn’t know what it is doing and lacks the degree of self-criticism it takes to spot a mistake when it is made.
Of course, this lack of logical coherence may be great news for some AI evangelists who frequently deny that AI deployment will cost jobs.
Why?
Because it provides an argument that humans will still be required to oversee the application of these intelligent machines. But those skilled human operators capable of spotting logical errors before they are put into action will probably need different skills than those used by the humans AI moves aside.
Writing in an extensive social media post explaining the report, Apple researcher Mehrdad Farajtabar warned:
“Understanding LLMs’ true reasoning capabilities is crucial for deploying them in real-world scenarios where accuracy and consistency are non-negotiable — especially in safety, education, health care and decision making systems. Our findings emphasize the need for more robust and adaptable evaluation methods. Developing models that move beyond pattern recognition to true logical reasoning is the next big challenge for the AI community.”
I think there is another challenge as well. Apple’s research team perhaps inadvertently showed that existing models simply apply the kind of logic they have been trained to use.
The looming problem with that is the extent to which the logic chosen for use when training those models may reflect the limitations and prejudices of those who pay for the creation of those models. As those models are then deployed in the real world, this implies that future decisions taken by those models will maintain the flaws (ethical, moral, logical, or otherwise) inherent in the original logic.
Baking those weaknesses into AI systems used internationally on a day-to-day basis may end up strengthening prejudice while weakening the evidence for necessary change.
To a great extent, even within recent AI draft regulations, these big arguments remain completely unresolved by starry-eyed governments seeking elusive chimeras of economic growth in an age of existentially challenging crisis-driven change.
If nothing else, Apple’s teams have shown the extent to which current belief in AI as a panacea for all evils is becoming (like that anti-Wi-Fi amulet currently being sold by one media personality) a new tech faith system, given how easily a few query tweaks can generate fake results and illusion.
In the end, it really shouldn’t be controversial to think that we don’t want AI systems in charge of public transportation (including robotaxis) to end up having accidents merely because the sensors picked up confusing data that their inherent model just couldn’t figure out.
In a world of constant possibility, unexpected challenge is normal, and garbage in does, indeed, become garbage out. Perhaps we should be more deliberate in the application of these new tools? The public certainly seems to think so.
Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.
Source:: Computer World
In November 2023, violent Atlantic storm “Domingos” struck the northern coast of Portugal, generating record-high waves and leaving a path of destruction across much of Western Europe. People on land were grappling with flooded homes, closed roads, and landslides. But just offshore, a potentially game-changing wave energy device was happily bobbing up and down, side to side — seemingly in its element. Built by Swedish startup CorPower, the giant golden buoy turns the raw power of the ocean into a clean, reliable electricity source. CorPower claims its tech is at least five times more efficient than the previous state-of-the-art. “We’ve…
This story continues at The Next Web
Source:: The Next Web
Stockholm-based node.vc has closed a €71mn fund to back early-stage startups in the Nordics. “The Nordic tech ecosystem is thriving, especially in areas like AI, gaming, fintech, and climate tech,” John Elvesjö, managing partner at node.vc, told TNW. “We’re seeing experienced talent, particularly from companies like Klarna, Spotify, Voi, Kry, and Pleo, stepping up to become founders,” Elvesjö said. The devaluation of employee stock options and increasing layoffs have sparked “fresh entrepreneurial energy,” he added. The fund is sector-agnostic. This means that any startup with “innovative technology” can apply. The size of initial investment per company will be around €1-2mn,…
This story continues at The Next Web
Source:: The Next Web
Semiconductor rivals Intel and AMD announced the formation of an x86-processor advisory group that will try to address ever-increasing AI workloads, custom chiplets, and advances in 3D packaging and system architectures.
Members of the x86 Ecosystem Advisory Group include Broadcom, Dell, Google, Hewlett Packard Enterprise, HP, Lenovo, Meta, Microsoft, Oracle, and Red Hat. Notably missing: TSMC — the world’s largest contract chipmaker. Linux creator Linus Torvalds and Epic Games CEO Tim Sweeney are also members.
The mega-tech companies plan to collaborate on architectural interoperability and hope to “simplify software development” across the world’s most widely used computing architecture, according to a news announcement.
“We are on the cusp of one of the most significant shifts in the x86 architecture and ecosystem in decades — with new levels of customization, compatibility and scalability needed to meet current and future customer needs,” Intel CEO Pat Gelsinger said in a statement.
Generative AI (genAI) is moving into smartphones, PCs, cars, and Internet of Things (IoT) devices because on-device processing can access data locally, return results faster, and keep data more secure.
That’s why, over the next several years, silicon makers are turning their attention to fulfilling the promise of AI at the edge, which will allow developers to essentially offload processing from data centers — giving genAI app makers a free ride as the user pays for the hardware and network connectivity.
Apple, Samsung, and other smartphone and silicon manufacturers are rolling out AI capabilities on their hardware, fundamentally changing the way users interact with edge devices. On the heels of Apple rolling out an early preview of iOS 18.1 with its first genAI tools, IDC released a report saying nearly three in four smartphones will be running AI features within four years.
The release of the next version of Windows — perhaps called Windows 12 — later this year is also expected to be a catalyst for genAI adoption at the edge; the new OS is expected to have AI features built in.
At the 2024 Consumer Electronics Show in January, PC vendors and chipmakers showcased advanced AI-driven functionalities. But despite the enthusiasm generated by those selling or making genAI tools and platforms, enterprises are expected to adopt a more measured approach over the next year, according to one Forrester Research report.
“CIOs face several barriers when considering AI-powered PCs, including the high costs, difficulty in demonstrating how user benefits translate into business outcomes, and the availability of AI chips and device compatibility issues,” said Andrew Hewitt, principal analyst at Forrester Research.
Source:: Computer World
As expected, Apple has introduced a much faster Apple Intelligence-capable iPad mini equipped with the same A17 Pro chip used in the iPhone 15 Pro series. That’s a good improvement from the A15 Bionic in the previous model, and makes for faster graphics, computation, and AI calculation.
It also sets the scene for the public release of the first Apple Intelligence features on Oct. 28, when I expect all of Apple’s heavily promoted wave of current hardware ads to at last make more sense. (We can also expect new Macs before the end of October.)
By announcing the new mini via press release, Apple broke with tradition twice in this heavily telegraphed (we all expected it) product iteration.
First, in what seems a fairly rare move, Apple unveiled the new hardware right after a US holiday; second, the release wasn’t flagged in advance by Apple industry early-warning system Mark Gurman, though he did anticipate an October update. The introduction of a more performant Apple tablet is likely to further accelerate Apple’s iPad sales, which increased 14% in Q2 2024, according to Counterpoint. Apple will remain the world’s leading tablet maker, and earlier reports of the death of this particular component of Apple’s tablet range have proved unfounded.
At first glance, the new iPad mini will seem familiar to most users: the biggest change is an updated chip inside a similar device, with the same height, width, and weight as the model it replaces. Available in blue, purple, starlight, and space gray, the iPad mini keeps its 8.3-in. Liquid Retina display. Remarkably, pricing on the new models starts at $499 for 128GB of storage — twice the storage of the 2021 iPad mini this one replaces, at the same starting price.
There are other highlights here.
The A17 Pro processor means the iPad mini now has a 6-core CPU, which makes for a 30% boost in CPU performance in comparison to the outgoing model. You also get a 25% boost to graphics performance, along with the necessary AI-based computation capability enhancements required to run Apple Intelligence. Of course, the chip is far more capable of handling the kind of professionally focused apps used by designers, pilots, or doctors.
While we all recognize at this stage that Apple’s decision to boost all its products with more powerful chips is because it wants to ensure support for Apple Intelligence, this also means you get better performance for other tasks as well. All the same, it will be interesting to discover the extent to which a far more contextually-capable Siri and the many handy writing assistance tools offered by Apple’s AI will boost existing tablet-based workflows in enterprise, education, and domestic use.
If you use your iPad for work, it is likely to be good news that the new iPad mini has a 12-megapixel (MP) back camera and 12MP conferencing camera. While the last-generation model also boasted 12MP cameras, the 5x digital zoom is a welcome enhancement, while the 16-core Neural Engine inside the iPad mini’s chip means those images you do capture are augmented on the fly by AI to improve picture/video quality. Overall, you’ll get better results when taking images or capturing video.
“There is no other device in the world like iPad mini, beloved for its combination of powerful performance and versatility in our most ultraportable design,” said Bob Borchers, Apple’s vice president of Worldwide Product Marketing. “iPad mini appeals to a wide range of users and has been built for Apple Intelligence, delivering intelligent new features that are powerful, personal, and private.
“With the powerful A17 Pro chip, faster connectivity, and support for Apple Pencil Pro, the new iPad mini delivers the full iPad experience in our most portable design at an incredible value.”
In common with all its latest products, Apple is putting every possible focus on AI tools, making crystal clear its plans to continue investing in its unique blend of privacy and the personal augmentation promised by its human-focused AI. The current selection of tools should really be seen as the beginning of this part of its journey.
Additional improvements in the new iPad mini include faster connectivity and support for the Apple Pencil Pro.
There’s an environmental mission visible in the product introduction, too. The new iPad uses 100% recycled aluminium in its enclosure along with 100% recycled rare earth elements in all its magnets and recycled gold and tin in the printed circuit boards.
Please follow me on LinkedIn, Mastodon, or join me in the AppleHolic’s bar & grill group on MeWe.
Source:: Computer World
Munich-based startup OroraTech has secured €25mn in funding to scale up its AI-powered wildfire detection system. Korys, the investment arm of Belgium’s Colruyt family, led the funding round. The EU’s Circular Bioeconomy Fund (ECBF) also chipped in, alongside existing investor Bayern Kapital. OroraTech will use the fresh funding to fuel the next phase of its growth; the company looks to expand into global markets beyond Europe and to keep refining its technology. OroraTech’s so-called Wildfire Solution collates imagery from its own probes, as well as over 20 other Earth observation satellites. The startup has trained an…
This story continues at The Next Web
Source:: The Next Web
Adobe’s AI model for video generation is now available in a limited beta, enabling users to create short video clips from text and image prompts.
The Firefly Video model, first unveiled in April, is the latest generative AI model Adobe has developed for its Creative Cloud products — the others cover image, design and vector graphic generation.
From Monday, there are two ways to access the Firefly Video model as part of the beta trial.
One is the text and image to video generation that Adobe previewed last month, accessible in the Firefly web app at firefly.adobe.com. This enables users to create five-second, 720p-resolution videos from natural-language text prompts. These can contain realistic video footage and 2D or 3D animations. It’s also possible to generate video using still images as a prompt, meaning a photograph or illustration could be used to create b-roll footage.
To provide greater control over the output, there are options for different camera angles, shot size, motion and zoom, for example, while Adobe says it’s working on more ways to direct the AI-generated video.
Adobe said it only trains the video model on stock footage and public domain data that it has rights to use for training its AI models. It won’t use customer data or data scraped from the internet, it said.
To access the beta, you’ll need to join the waitlist. It’s free for now, though Adobe said in a news release that it will reveal pricing information once the Firefly Video model gets a full launch.
Adobe is one of several technology companies working on AI video generation capabilities. OpenAI’s Sora promises to let users create minute-long video clips, while Meta recently announced its Movie Gen video model and Google unveiled Veo back in May. However, none of these tools are publicly available at this stage.
The other way to access the Firefly Video model is with the Generative Extend tool, available in beta in video editing app Premiere Pro. Generative Extend can be used to create new frames to lengthen a video clip — although only by a couple of seconds, enabling an editor to hold a shot longer to create smoother transitions. Footage created with Generative Extend must be 1920×1080 or 1280×720 during the beta, though Adobe said it’s working on support for higher resolutions.
Background audio can also be extended for up to 10 seconds, thanks to Adobe’s AI audio generation technology, though spoken dialogue can’t be generated.
At its MAX conference on Monday, Adobe also announced that its GenStudio for Marketing Performance app, designed to help businesses manage the influx of AI-generated content, is now generally available.
Source:: Computer World
Adobe’s GenStudio content supply chain platform is now generally available, with the ability to publish content directly to social media channels such as Instagram, Snap and TikTok coming soon.
Adobe launched GenStudio for Performance Marketing — as the standalone GenStudio application is now called — in preview at its MAX conference in 2023. At this year’s MAX event it made a slight change in branding: GenStudio now refers to both the GenStudio for Performance Marketing app and the various Adobe applications it integrates with, such as Adobe Experience Manager, Adobe Express, and Workfront.
Adobe has been quick to integrate its Firefly generative AI models across Creative Cloud apps such as Photoshop and Illustrator, enabling designers to increase their output significantly, the company says. (IDC analysts also predict genAI will boost marketing team productivity by 40% in the next five years.)
The aim of GenStudio for Performance Marketing is to help marketers access and use the AI-generated content created within their organization while respecting brand guidelines and legal compliance policies.
“The challenge facing most brands out there is that they have an inefficient content supply chain, where bottlenecks appear in areas like planning, content development and measurement,” said Varun Parmar, general manager for GenStudio at Adobe, in a news briefing. This is where GenStudio for Performance Marketing can help, he said, providing a “seamless way for brands and agencies to deliver on-brand and personalized content that is compliant with brand standards.”
GenStudio for Performance Marketing performs several functions. First, it serves as a content repository where users can access pre-approved assets such as images, logos, and videos for use in the creation of marketing content. This could be anything from display ads to banners and emails. To enable reuse of content across campaigns, GenStudio for Performance Marketing integrates with Adobe Experience Manager Assets, Adobe’s digital asset management app.
Users can also edit and adapt existing assets from the app using the Firefly AI models. This could mean creating variations of email ads tailored to a specific geographic region, for instance.
Those models will soon include new video capabilities, including text-to-video and image-to-video, now available as beta versions.
In GenStudio for Performance Marketing, an AI-powered “brand check” feature can automatically inspect assets before they are used in marketing campaigns, comparing them with pre-defined templates and alerting marketing and design teams where content may be out of step with a firm’s brand compliance guidelines. Each asset is given a score out of 100, with detailed recommendations for changes: an email headline that’s too lengthy, for example, or an inappropriate tone of voice. An integration with Adobe’s Workfront also enables automated “multi-step review workflows” to provide additional oversight of the approval process.
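To make the scoring idea concrete, here is a minimal rule-based sketch of how such a check might assign a score out of 100 and attach recommendations. This is purely hypothetical — the rules, thresholds, and banned words are my own assumptions for illustration, not Adobe’s actual model, which is AI-powered rather than rule-based.

```python
def brand_check(asset: dict, max_headline_len: int = 60) -> dict:
    """Hypothetical brand-compliance scorer: start at 100 points and
    deduct for each detected issue, returning recommendations."""
    score, issues = 100, []
    headline = asset.get("headline", "")

    # Rule 1 (assumed): headlines over a length limit lose 20 points.
    if len(headline) > max_headline_len:
        score -= 20
        issues.append(f"Headline too long ({len(headline)} > {max_headline_len} chars)")

    # Rule 2 (assumed): certain words signal an off-brand tone of voice.
    off_brand = {"cheap", "guaranteed"}
    hits = off_brand & set(headline.lower().split())
    if hits:
        score -= 15
        issues.append(f"Off-brand tone of voice: {sorted(hits)}")

    return {"score": max(score, 0), "issues": issues}

result = brand_check({"headline": "Guaranteed savings on our new cheap plan today!"})
print(result["score"], result["issues"])
```

A real system would replace these hand-written rules with model-driven checks, but the shape of the output — a numeric score plus actionable recommendations per asset — is what the article describes.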
Adobe also plans to let users publish content directly from GenStudio for Performance Marketing to social media channels from the likes of Meta, TikTok and Snap, as well as display ad campaigns with Google’s Campaign Manager 360, Amazon Ads and Microsoft Advertising. This campaign activation feature is “coming soon,” Adobe said, without providing further details. It will also be possible for customers to publish content via their own email and web channels via Adobe Journey Optimizer in future, Adobe said.
Finally, GenStudio for Performance Marketing will provide analytics on the performance of content that’s live on platforms owned by Meta (such as click-through rate, cost per click and spend), with integrations with others such as Microsoft Advertising, Snap and TikTok also available “soon.”
“All companies have to ramp up their genAI knowledge and its impact on brand content/assets,” said Jessica Liu, principal analyst at Forrester.
“Solutions like GenStudio present compelling opportunities for companies to alter their creative development and production process — such as creating more content, accelerating workflows, streamlining workflows, or shifting workforce skillsets.”
Adobe hasn’t published a list price for GenStudio and GenStudio for Performance Marketing. A company representative said, “As this is enterprise software, there isn’t a one size fits all pricing as it’s based on the customer need/requirement.”
Source:: Computer World
The European Space Agency (ESA) has signed a €119mn contract with Italian scaleup D-Orbit for its first in-orbit servicing mission, RISE. Scheduled for launch in 2028, RISE will attempt to rendezvous with, maneuver, and detach from an ESA satellite in geostationary orbit. Then it will embark on an 8-year mission, visiting several other satellites and giving them a new lease on life. RISE, which is about the size of a minivan, will be like a car mechanic, but for aging spacecraft. It will refuel them, repair them, relocate them to a different orbit, and even attach them with a module…
This story continues at The Next Web
Source:: The Next Web
By Nick Godt
Volkswagen plans to bring eight new affordable EVs to market by 2027.
Source:: Digital Trends
By Nick Godt
Juiced Bikes is offering 20% off on all its products amid mounting signs that the e-bike maker is going bankrupt.
Source:: Digital Trends