Microsoft has revived a classic text editor from 1991

When MS-DOS 5.0 was released in 1991, one of the big innovations was the MS-DOS Editor, a classic text editor that quickly became popular with users. Now, Microsoft has developed a new version of MS-DOS Editor called Edit, according to Ars Technica.

Compared to the original, Edit offers a number of improvements, including support for Unicode. In addition, the 300-kilobyte limit has been removed, meaning users can work with gigabyte-sized files if they want.

Edit was written in the Rust programming language and is based on open-source code. And it doesn’t require Windows to run; the text editor works just as well on macOS or Linux.

If you want to try Edit, it can be downloaded from GitHub.

Source:: Computer World

Google launches new genAI model for robots

Google subsidiary DeepMind has unveiled Gemini Robotics On-Device, a new version of the Gemini AI model designed to run on robots and to work without an internet connection. The new model reportedly supports natural language, making it easy to control the robot’s movements.

In terms of performance, Gemini Robotics On-Device comes close to the connected Gemini Robotics, TechCrunch reports.

Developers interested in working with Gemini Robotics On-Device can download the Gemini Robotics SDK from GitHub.

Source:: Computer World

Vivo T4 Lite With 90Hz Display, 6000mAh Battery Launched In India At ₹9,999

By Deepti Pathak Vivo has launched the T4 Lite 5G in India, adding strength to the company’s budget T-series…
The post Vivo T4 Lite With 90Hz Display, 6000mAh Battery Launched In India At ₹9,999 appeared first on Fossbytes.

Source:: Fossbytes

Garena Free Fire Max Redeem Codes for June 25

By Hisan Kidwai Garena Free Fire Max is one of the most popular games on the planet, and for…
The post Garena Free Fire Max Redeem Codes for June 25 appeared first on Fossbytes.

Source:: Fossbytes

What are Gemini, Claude, and Meta doing with our data?

New research released Tuesday on the data collection and sharing practices of leading large language models reveals that organizations such as Meta, Google, and Microsoft are collecting sensitive data and sharing it with unknown third parties.

And according to the findings from Incogni, a personal data removal and data privacy company, businesses may face even greater risks than the multitude of individuals who use the various LLMs. It doesn’t help that, it said, “every analyzed privacy policy required college-level reading ability to interpret.”

“Employees frequently use generative AI tools to help draft internal reports or communications, not realizing that this can result in proprietary data becoming part of the model’s training dataset,” a release stated. “This lack of safeguards not only exposes individuals to unwanted data sharing, but could also lead to sensitive business data being reused in future interactions with other users, creating privacy, compliance, and competitive risks.”

Ron Zayas, the CEO of Ironwall by Incogni, the company’s B2B and B2G (business to government) division, said in an interview, “the analogy would be that we spend a lot of time as businesses making sure that our emails are secure, making sure that our machines lock themselves down after a certain period of time, of following SOC 2 protocols, all these things to protect information.” But now, he said, the concern is that “we’ve opened the door, and we have employees feeding information to engines that will process that and use it [perhaps in responses to competitors or foreign governments].”

To evaluate the LLMs, Incogni developed a set of 11 criteria that allowed it to assess the privacy risk in each, and compiled the results to determine each program’s privacy ranking in the areas of training, transparency, and data collection and sharing. From these, it also derived an overall rating.

Key findings in the report revealed that:

Le Chat by Mistral AI is the “least privacy invasive platform, with ChatGPT and Grok following closely behind. These platforms performed the best when it comes to how transparent they are on how they use and collect data, and how easy it is to opt out of having personal data used to train underlying models.”

LLM platforms developed by the biggest tech companies turned out to be the most privacy-invasive, the report said, with Meta AI (Meta) being the worst, followed by Gemini (Google) and Copilot (Microsoft).

Gemini, DeepSeek, Pi AI, and Meta AI don’t seem to allow users to opt out of having prompts used to train the models.

ChatGPT turned out to be the most transparent about whether prompts will be used for model training, and it had a clear privacy policy.

Grok (xAI) may share photos provided by users with third parties.

Meta.ai “shares names, email addresses and phone numbers with external entities, including research partners and corporate group members.”

Justin St-Maurice, technical counselor at Info-Tech Research Group, said that from a corporate perspective, “training your staff on what not to put into tools like ChatGPT, Gemini, or Meta’s AI is critical.”

He added, “just as people are taught not to post private or sensitive information on social media, they need similar awareness when using generative AI tools. These platforms should be treated as public, not private. Putting Personally Identifiable Information (PII) or proprietary company data into these systems is no different than publishing it on a blog. If you wouldn’t post it on LinkedIn or Twitter, don’t type it into ChatGPT. The good news? You can do a lot with these tools without needing to expose sensitive data.”

According to St-Maurice, “if you’re worried about Meta or Google sharing your data, you should reconsider your overall platform choices; this isn’t really about how LLMs process your data, but how these large corporations handle your data more generally.”

Privacy concerns are important, he said, “but it doesn’t mean organizations should avoid large language models altogether. If you’re hosting models yourself, on-prem or through secure cloud services like Amazon Bedrock, you can ensure that no data is retained by the model.”

St-Maurice pointed out that, in these scenarios, “the LLM functions strictly as a processor, like your laptop’s CPU. It doesn’t ‘remember’ anything you don’t store and pass back into it yourself. Build your systems so that the LLM does the thinking, while you retain control over memory, data storage, and user history. You don’t need OpenAI or Google to unlock the value of LLMs; host your own internal models, and cut out the risk of third-party data exposure entirely.”
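
To make that pattern concrete, here is a minimal sketch of an application that treats a self-hosted model as a stateless processor while keeping all conversation memory on its own side. The endpoint URL and the JSON shapes are placeholders invented for the sketch, not any particular product’s API.

# Minimal sketch, assuming a generic self-hosted model server reachable over HTTP.
# The URL and payload/response shapes below are assumptions for illustration only.
import json
import urllib.request

LOCAL_LLM_URL = "http://localhost:8080/generate"  # hypothetical self-hosted endpoint

# Memory, data storage, and user history live in your own system, not in the model.
conversation: list[dict[str, str]] = []

def ask(user_message: str) -> str:
    conversation.append({"role": "user", "content": user_message})
    payload = json.dumps({"messages": conversation}).encode("utf-8")
    request = urllib.request.Request(
        LOCAL_LLM_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        answer = json.loads(response.read())["text"]  # assumed response field
    conversation.append({"role": "assistant", "content": answer})
    return answer

Nothing persists between calls unless the application chooses to store it, which is exactly the control over memory and history that St-Maurice describes.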

What people don’t understand, added Ironwall’s Zayas, “is that all this information is not only being sucked in, it’s being repurposed, it’s being reused. It’s being publicized out there, and it’s going to be used against you.”

Source:: Computer World

European startup’s space capsule ‘lost’ after reentry

By Siôn Geschwindt Communication with a privately funded European space capsule was lost Tuesday shortly after the spacecraft reentered Earth’s atmosphere.  The capsule launched on a SpaceX rocket from Vandenberg Space Force Base in California on Monday. The Exploration Company, which built the spacecraft, described the mission as a “partial success” and a “partial failure.” “The capsule was launched successfully, powered the payloads nominally in orbit, stabilised itself after separation with the launcher, re-entered and re-established communication after blackout,” the Munich-based startup said in a LinkedIn post today.  “But it encountered an issue afterwards, based on our current best knowledge, and we lost…This story continues at The Next Web

Source:: The Next Web

Mosyle’s AccessMule makes employee access a little easier for SMBs

Apple device management vendor Mosyle has introduced AccessMule, an easy-to-use workflow platform designed to address a specific set of small business needs related to granting, managing, auditing, sharing, storing, and removing employee access from company systems.

These protections are particularly important when on-boarding and off-boarding employees. 

To understand why this matters, it’s important to consider that the main source of cybersecurity breaches among all businesses is not hackers per se, but intentional or unintentional actions performed by employees. That human factor is behind 74% of all security breaches, according to 2023 research from Verizon.

Mosyle has its own research to explain the problem. 

Employee access is a time bomb

According to that data:

Around 87% of small and mid-sized businesses (SMBs) say they cannot immediately verify which employees hold which company permissions.

Roughly the same percentage of SMBs also fail to promptly revoke access when employees leave.

Nearly 90% of companies have found that former employees still had access to company applications and files after departing.

None of these risks are good, of course — particularly in the context of an unravelling consensus around cybersecurity. So, it makes sense for companies to put sufficient protections in place today rather than face attacks in the future. 

Mosyle encountered challenges managing the on/off-boarding process at first. “The decision to build AccessMule was born out of necessity at Mosyle,” said the company’s CEO, Alcyr Araujo. “Later, we realized it wasn’t just a gap for our organization, but a fundamental problem that needed to be solved for all SMBs. We’re launching AccessMule today as an independent subsidiary that will empower organizations with a high-quality, secure and efficient access and password management platform at an affordable price.”

What does AccessMule provide?

Mosyle’s wholly owned subsidiary will offer a range of tools designed to defend against the consequences of lax employee access security. The main focus is to automate those elements of access control that SMBs often fail to manage. That means tools to automate onboarding and offboarding processes, along with controls to assign access based on roles and oversight reporting that makes it possible to check who has corporate access at any given time.

Additional features include built-in password management, safe password sharing, encrypted sharing, and support for shared multi-factor authentication (MFA). Role-based access control (RBAC) features grant permissions in bulk, making it easy to assign permissions for new employees based on their role with a single action. All of these tools and services are available via an easy-to-use portal, the company said. The idea is that IT can maintain oversight of device and employee security, helping them better protect their company.
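
To illustrate the role-based idea in the abstract, the short sketch below shows how a role-to-permission mapping lets onboarding grant everything for a role in one action and offboarding revoke it just as quickly, with a simple audit view of who has access at any time. The names and structures are invented for illustration; they are not AccessMule’s data model or API.

# Illustrative sketch of role-based access control (RBAC); all names are hypothetical.
ROLE_PERMISSIONS = {
    "support": {"helpdesk", "crm:read"},
    "engineer": {"repo", "ci", "crm:read"},
    "finance": {"billing", "erp"},
}

user_permissions: dict[str, set[str]] = {}

def onboard(user: str, role: str) -> None:
    """Grant every permission attached to the role in a single action."""
    user_permissions[user] = set(ROLE_PERMISSIONS[role])

def offboard(user: str) -> None:
    """Promptly revoke all access when someone leaves."""
    user_permissions.pop(user, None)

def audit() -> dict[str, set[str]]:
    """Answer 'who has access to what' at any given time."""
    return dict(user_permissions)

onboard("alice", "engineer")
offboard("alice")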

The ever booming Apple enterprise

Mosyle’s announcement is just one of a range to emerge from across Apple’s enterprise value chain since WWDC. Just last week, Jamf published its own in-depth Apple-focused security report, while open-source device management vendor Fleet recently announced $27 million in new series B funding to help accelerate development of its open platform for both cloud- and self-hosted device management for organizations of all kinds. Another vendor, Addigy, recently introduced a new security partnership with CyberFOX.

Apple’s enterprise partners typically begin making service announcements after WWDC, prompted by the enterprise enhancements Apple unveils at the event. It is possible that all of Apple’s reputable device management partners have now begun working with the new betas and the enterprise features Apple is building for introduction this fall.

Apple at WWDC introduced a host of new enterprise-focused improvements, including better support for Apple Accounts in the enterprise, improvements in device management, and a significant enhancement in the quality and quantity of device information IT can access from across their fleets. The latter means IT will even be able to audit MAC addresses, Activation Lock status, storage, and cellular information, as well as AppleCare coverage. Platform SSO, app management, and device sharing tools were also improved at WWDC.

You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.

Source:: Computer World

Danish biotech Cellugy wants to replace microplastics in cosmetics

By Siôn Geschwindt Danish biotech Cellugy has raised €8.1mn in EU funding to accelerate production of a biodegradable material designed to replace microplastics in cosmetics. The grant, awarded under the EU’s LIFE Programme for environmental projects, will support the commercialisation of EcoFLEXY, a cellulose-based material for use in personal care products such as creams, gels, and toothpaste.  Cellugy claims EcoFLEXY is the first material of its kind to match the performance of fossil-based carbomers, which are famed for their ability to give cosmetics a smooth, consistent texture and a long shelf life. Currently, carbomers dominate the global cosmetics market despite links to microplastic…This story continues at The Next Web

Source:: The Next Web

How To Download YouTube Videos to Flash Drive Easily & Safely

By Partner Content To watch YouTube videos without buffering and play them on various devices, downloading YouTube videos to…
The post How To Download YouTube Videos to Flash Drive Easily & Safely appeared first on Fossbytes.

Source:: Fossbytes

Garena Free Fire Max Redeem Codes for June 24

By Hisan Kidwai Garena Free Fire Max is one of the most popular games on the planet, and for…
The post Garena Free Fire Max Redeem Codes for June 24 appeared first on Fossbytes.

Source:: Fossbytes

Microsoft’s new genAI model to power agents in Windows 11

Microsoft is laying the groundwork for Windows 11 to morph into a genAI-driven OS.

The company on Monday announced a critical AI technology that will make it possible to run generative AI (genAI) agents on Windows without Internet connectivity.

Microsoft’s small language model, called Mu, is designed to respond to natural language queries within the Windows OS, the company said in a blog post Monday. Mu takes advantage of the neural processing units (NPUs) in Copilot+ PCs, Vivek Pradeep, vice president and distinguished engineer for Windows Applied Sciences, said in the post.

Three chip makers — Intel, AMD and Qualcomm — provide NPUs in Copilot+ PCs prebuilt with Windows 11.

Mu already powers an agent that handles queries in the Settings menus in a preview version of Windows 11 available to early adopters with Copilot+ PCs. The feature ships in Windows 11 preview build 26200.5651, released June 13.

The model provides a better understanding of queries and their context, and “has been designed to operate efficiently, delivering high performance while running locally,” Pradeep wrote.

Microsoft is aggressively pushing genAI features into the core of Windows 11 and Microsoft 365. Last month, the company introduced Windows ML 2.0, a new developer stack for making AI features accessible in software applications.

The company is also developing feature- or application-specific AI models for Microsoft 365 applications.

The 330-million-parameter Mu model is designed to reduce AI computing cycles so it can run locally on Windows 11 PCs; laptops have limited hardware and battery life and would otherwise need a cloud service for AI.

“This involved adjusting model architecture and parameter shapes to better fit the hardware’s parallelism and memory limits,” Pradeep wrote.

The model also generates high-quality responses with a better understanding of queries. Microsoft fine-tuned a custom Mu model for the Settings menu that could respond to ambiguous user queries on system settings. For example, the model can handle queries that do not specify whether to raise brightness on a main or secondary monitor.

The Mu encoder-decoder model breaks down large queries into a more compact representation of information, which is then used to generate responses. That’s different from large language models (LLMs), which are decoder-only models and must process all of the text to generate responses.

“By separating the input tokens from output tokens, Mu’s one-time encoding greatly reduces computation and memory overhead,” Pradeep said.
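
As a rough back-of-the-envelope illustration of that claim, the toy functions below count how many positions get attended per generated token in the two setups. The figures (a 200-token query compressed to a 32-vector encoding, 40 output tokens) are invented for the example, and the sketch deliberately ignores real-world optimizations such as KV caching; it is not Mu itself.

# Toy comparison, not Mu: counts attended positions per generated token.
def decoder_only_positions(prompt_tokens: int, new_tokens: int) -> int:
    # Every step attends over all prompt tokens plus everything generated so far.
    return sum(prompt_tokens + i for i in range(new_tokens))

def encoder_decoder_positions(prompt_tokens: int, latent_size: int, new_tokens: int) -> int:
    # The prompt is encoded once into a compact, fixed-size representation;
    # each decoder step then attends over that representation plus prior output tokens.
    encode_once = prompt_tokens
    decode = sum(latent_size + i for i in range(new_tokens))
    return encode_once + decode

print(decoder_only_positions(200, 40))           # 8780
print(encoder_decoder_positions(200, 32, 40))    # 2260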

The encoder–decoder approach was significantly faster than LLMs such as Microsoft’s Phi-3.5, which is a decoder-only model. “When comparing Mu to a similarly fine-tuned Phi-3.5-mini, we found that Mu is nearly comparable in performance despite being one-tenth of the size,” Pradeep said.

Those gains are crucial for on-device and real-time applications. “Managing the extensive array of Windows settings posed its own challenges, particularly with overlapping functionalities,” Pradeep said.

The response time was under 500 milliseconds, which aligned with “goals for a responsive and reliable agent in Settings that scaled to hundreds of settings,” Pradeep said.

Microsoft has many genAI technologies, including OpenAI’s ChatGPT and its latest homegrown Phi-4 model, which can generate images, video, and text.

Source:: Computer World

Has Apple become addicted to ‘No’?

In a world loaded with existential challenge, it should not surprise anyone that Apple faces its own crisis. It should do what any cornered animal will always do and fight hard and dirty to regain freedom. That’s why it’s of concern to once again learn this weekend that Apple is “considering” acquisitions in the generative AI (genAI) space, because by this time in the fight, I want that chatter to be about acquisitions that have been made. 

Look, anyone can consider making a purchase and then come up with a dozen reasons not to go through with it. That’s not hard at all; it’s the inevitable articulation of small-c conservatism, which tends to favor stasis over change. My concern is that Apple’s own growth mindset might have been replaced by a more conservative approach, which means the company becomes really good at finding reasons not to do things, and less good at identifying when it really should do something.

No can’t be the default

Apple’s history is packed with conflict between good ideas the company rejected and brilliant ideas it chose to move forward with. It is arguable that some of the ideas the company has looked at historically are only now becoming viable devices. (I’m thinking of the speculated HomePod as an idea of that kind.) Apple executives have frequently discussed how the company is just as proud of the things it doesn’t do as of those it does. It’s a company instinctively good at saying “No” — until it finds a good reason to say “Yes.”

The problem is that when it comes to genAI, it still feels like there’s a lot of creative mileage to be had from injecting some chaos into the R&D crib. To achieve that, it seems necessary that Apple find the spleen to take a few risks on the M&A journey.

The company can’t simply wander down to the genAI development shops and find reasons not to purchase things; it needs to pick up all the shiniest things it comes across, using whatever financial muscle it takes to ensure they end up in Apple’s hold rather than elsewhere. 

Why must it do this? Because genAI isn’t finished yet. 

The genAI evolution continues

Sure, Apple’s widely disclosed challenges with Siri mean it is motivated to try new approaches to push that project ahead, but the truth is that no one — not even OpenAI — really has genAI that is anything other than a hint of what this tech is likely to be able to accomplish in a decade or two. We are still early in the AI race, and that means today’s winners can still lose and those at the back of the pack have an opportunity to get ahead. 

So, it makes sense for Apple to take a few expensive risks, rather than staying inside the safe zone. Does Perplexity have a few tools that could boost Apple Intelligence? Then grab them. Are there others in AI with tools that could help make Siri smart and hardware products sing? 

Bring them in. Take risks. Get hungry, be foolish. Make it happen.

It is also worth thinking about retention at this point. 

Keep them keen

Several pieces by Mark Gurman in recent years tell us that, in many cases, people Apple hired through company acquisitions have subsequently jumped ship because they did not find what they were looking for. If that is the case, it’s a problem that needs to be fixed; it suggests at least some of the company’s assumptions about how it works with its employees must be challenged, and new ways found to ensure acquired staffers actually want to stick around.

Apple has tried stock options to boost retention. That’s not enough. Money helps, but as Maslow says, agency and empowerment are more important. Steve Jobs understood this, saying during his last D: All Things Digital interview in 2010, “If you want to hire great people and have them stay working for you, [you] have to let them make a lot of decisions and you have to be run by ideas, not hierarchy…. The best ideas have to win — otherwise good people don’t want to stay.” 

I’m not saying Apple has become hierarchical, though I look with suspicion at work-from-home mandates and opposition to employee unionization as hints that hierarchy exists in some parts of the company. What I am saying is that if the old M.O. isn’t working, and if the important new recruits the company needs to tackle genAI don’t want to stick around, then something’s got to change. And if that means a lot more collaboration and empowerment and a few internal changes in approach, that’s a small price to pay in contrast to the global opportunity to lead the AI-driven tech future on a planet seemingly owned by billionaires and technocrats.

Sometimes you’ve got to play your hunches — how else are you going to find what you love?

You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.

Source:: Computer World

OnePlus Bullets Wireless Z3 Review: High on Bass, Low on Price

By Hisan Kidwai The neckband category, which was once the norm for wireless earbuds, has become a rarity in…
The post OnePlus Bullets Wireless Z3 Review: High on Bass, Low on Price appeared first on Fossbytes.

Source:: Fossbytes

‘Space umbrella’ returns first striking images of Earth’s forests

By Siôn Geschwindt A giant umbrella-like satellite fitted with European tech has revealed its first images of Earth’s surface. The probe, called “Biomass,” was built by a host of aerospace giants and startups for the European Space Agency (ESA). It launched in April on a Vega-C rocket from Europe’s spaceport in Kourou, French Guiana. European Astrotech, a UK-based startup, was responsible for fuelling the satellite ahead of takeoff. Biomass’ mission is to capture the most detailed measurements of forest carbon ever recorded from space. To get the job done, it’s been equipped with the first-ever P-band radar to enter orbit. It aims to…This story continues at The Next Web

Source:: The Next Web

Garena Free Fire Max Redeem Codes for June 23

By Hisan Kidwai Garena Free Fire Max is one of the most popular games on the planet, and for…
The post Garena Free Fire Max Redeem Codes for June 23 appeared first on Fossbytes.

Source:: Fossbytes

Garena Free Fire Max Redeem Codes for June 22

By Hisan Kidwai Garena Free Fire Max is one of the most popular games on the planet, and for…
The post Garena Free Fire Max Redeem Codes for June 22 appeared first on Fossbytes.

Source:: Fossbytes

Blox Fruits Codes (June 2025)

By Hisan Kidwai Blox Fruits is one of the most popular games on Roblox, and for good reason. Inspired…
The post Blox Fruits Codes (June 2025) appeared first on Fossbytes.

Source:: Fossbytes

Free Monopoly Go Dice Links (June 2025)

By Hisan Kidwai Monopoly Go is the mobile version of the popular board game we’ve all played at least…
The post Free Monopoly Go Dice Links (June 2025) appeared first on Fossbytes.

Source:: Fossbytes

GenAI — friend or foe?

Generative AI (genAI) could help people live longer and healthier lives, transform education, solve climate change, help protect endangered animals, speed up disaster response, and make work more creative, all while making daily life safer and more humane for billions worldwide. 

Or the technology could lead to massive job losses, boost cybercrime, empower rogue states, arm terrorists, enable scams, spread deepfakes and election manipulation, end democracy, and possibly lead to human extinction. 

Well, humanity? What’s it going to be?

California’s dreamin’

Last year, the California State Legislature passed a bill that would have required companies based in the state to perform expensive safety tests for large genAI models and also build in “kill switches” that could stop the technology from going rogue. 

If this kind of thing doesn’t sound like a job for state government, consider that California’s genAI companies include OpenAI, Google, Meta, Apple, Nvidia, Salesforce, Oracle, Anthropic, Anduril, Tesla, and Intel. 

The biggest genAI company outside California is Amazon; it’s based in Washington state, but has its AI division in California.

Anyway, California Gov. Gavin Newsom vetoed the bill. Instead, he asked AI experts, including Fei-Fei Li of Stanford, to recommend a policy less onerous to the industry. The resulting Joint California Policy Working Group on AI Frontier Models released a 52-page report this past week. 

The report focused on transparency, rather than testing mandates, as the solution to preventing genAI harms. The recommendation also included third-party risk assessments, whistleblower protections, and flexible rules based on real-world risk, much of which was also in the original bill. 

It’s unclear whether the legislature will incorporate the recommendations into a new bill. In general, the legislators have reacted favorably to the report, but AI companies have expressed concern about the transparency part, fearing they’ll have to reveal their secrets to competitors. 

Three kinds of risk

There are three fundamental ways that emerging AI systems could create problems, and even catastrophes, for people: 

1. Misalignment. Some experts fear that misaligned AI, acting creatively and autonomously, will operate in its own self-interest and against the interests of people. Research and media reports show that advanced AI systems can lie, cheat, and engage in deceptive behavior. GenAI models have been caught faking compliance, hiding their true intentions, and even strategically misleading their human overseers when it serves their goals; that was seen in experiments with models like Anthropic’s Claude and Meta’s CICERO, which lied and betrayed allies in the game Diplomacy despite being trained for honesty.

2. Misuse. Malicious people, organizations, and governments could use genAI tools to launch highly effective cyberattacks, create convincing deepfakes, manipulate public opinion, automate large-scale surveillance, and control autonomous weapons or vehicles for destructive purposes. These capabilities could enable mass disruption, undermine trust, destabilize societies, and threaten lives on an unprecedented scale.

3. The collective acting on bad incentives. AI risk isn’t a simple story of rogue algorithms or evil hackers. Harms could result from collective self-interest combined with incompetence or regulatory failure. For example, when genAI-driven machines replace human workers, it’s not just the tech companies chasing efficiency. It’s also the policymakers who didn’t adopt labor laws, the business leaders who made the call, and consumers demanding ever-cheaper products. 

What’s interesting about this list of ways AI could cause harm is that all are nearly certain to happen. We know that because it’s already happening at scale, and the only certain change coming in the future is the rapidly growing power of AI. 

So, how shall we proceed? 

We can all agree that genAI is a powerful tool that is becoming more capable all the time. We want to maximize its benefit to people and minimize its threat. 

So, here’s what I believe is the question of the decade: What do we do to promote this outcome? By “we,” I mean the technology professionals, buyers, leaders, and thought leaders reading this column. 

What should we be doing, advocating, supporting, or opposing? 

I asked Andrew Rogoyski, director of Innovation and Partnerships at the UK’s Surrey Institute for People-Centred Artificial Intelligence, that question. Rogoyski works full-time to maximize AI’s benefits and minimize its harms. 

One concern with genAI systems, according to Rogoyski, is that we’re entering a realm where nobody knows how they work — even when they benefit people. As AI gets more capable, “new products appear, new materials, new medicines, we cure cancer. But actually, we won’t have any idea how it’s done,” he said. 

“One of the challenges is these decisions are being made by a few companies and a few individuals within those companies,” he said. Decisions made by a few people “will have enormous impact on…global society as a whole. And that doesn’t feel right.” He pointed out that companies like Amazon, OpenAI, and Google have far more money to devote to AI than entire governments. 

Rogoyski pointed out the conundrum exposed by solutions like the one California is trying to arrive at. At the core of the California Policy Working Group’s proposal is transparency, treating AI functionality as a kind of open-source project. On the one hand, outside experts can help flag dangers. On the other, transparency opens the technology to malicious actors. He gave the example of AI designed for biotech, something designed to engineer life-saving drugs. In the wrong hands, that same tool might be used to engineer a catastrophic bio-weapon.

According to Rogoyski, the solution won’t be found solely in some grand legislation or the spontaneous emergence of ethics in the hearts of Silicon Valley billionaires. The solution will involve broad-scale collective action by just about everyone.

It’s up to us

At the grass-roots level, we need to advocate basing our purchasing, use, and investment decisions on AI systems whose makers are serious about ethical practices, strong safety policies, and alignment. 

We all need to favor companies that “do the right thing in the sense of sharing information about how they trained [their AI], what measures they put in place to stop it misbehaving and so on,” said Rogoyski.

Beyond that, we need stronger regulation based more on expert input and less on Silicon Valley businesses’ trillion-dollar aspirations. We need broad cooperation between companies and universities. 

We also need to support, in any way we can, the application of AI to our most pressing problems, including medicine, energy, climate change, income inequality, and others.

Rogoyski offers general advice for anyone worried about losing their job to AI: Look to the young. 

While older professionals might look at AI and feel threatened by it, younger people often see opportunity. “If you talk to some young creative who’s just gone to college [and] come out with a [degree in] photography, graphics, whatever it is,” he said, “They’re tremendously excited about these tools because they’re now able to do things that might have taken a $10 million budget.”

In other words, look for opportunities in AI to accelerate, enhance, and empower your own work.

And that’s generally the mindset we should all embrace: We are not powerless. We are powerful. AI is here to stay, and it’s up to all of us to make it work better for ourselves, our communities, our nations, and our world. 

Source:: Computer World

Apple Pay is going to get faster and more reliable

Contactless payments such as Apple Pay, and sustainability in inventory control, are going to get much easier with an upcoming update to the Near Field Communication (NFC) standard that will make devices connect more swiftly and support the NFC Digital Product Passport (NDPP) specification.

The first problems the new standard solves are range and reliability. At present, standard NFC supports a range of up to 0.2 inches, and the connections aren’t always robust. What that means for most of us is the need to wriggle an iPhone or Apple Watch around a little to connect to the payment terminal. The improved NFC increases that range to about 3/4 of an inch for all devices and makes the connection a little more resilient; the standard is also a little faster, which means that once you authorize a payment, it will go through more quickly than it does today.

Faster connections, easier payments, and more

Those range and reliability improvements aren’t just for mobile payments, of course. If you use your iPhone as a car key or have mobile transit cards in your Apple Wallet, you should get a much better experience when opening doors or catching public transit. The NFC update also comes as Apple prepares to introduce expanded support for digital IDs and in-store payments with iOS 26. The latter is interesting because, while the NFC Forum didn’t say anything about it, the update does support more complex transactions over NFC — that should make it easier to use supermarket loyalty cards at the same time as Apple Pay in a single tap. The Forum calls these “multi-purpose tap use cases where a single tap unlocks multiple functions.”

NFC Release 15 is also expected to advance new and exciting use cases, such as using your mobile phone as a payment terminal, championing sustainability, and optimizing NFC use across a variety of sectors, including automotive, transit, and access control. There is also support for a new feature designed to meet emerging sustainability regulations: the NFC Digital Product Passport (NDPP).

What is NDPP and is it safe?

Aimed at manufacturers, NDPP is a framework that allows a single NFC tag embedded in a product to store and transmit both standard and extended Digital Product Passport (DPP) data using NFC. That data includes information such as a product’s composition, origin, environmental, lifecycle, and recycling details. Most hardware manufacturers will need to begin capturing this kind of information under an incoming EU law known as the Ecodesign for Sustainable Products Regulation (ESPR). The information is meant to be made available to customers, business users, and recyclers, and is designed to boost transparency and sustainability. It will be interesting, for example, to use DPP inside future iPhones to determine where the device and its components originate – and it might be fun to explore refurbished devices to see whether components installed during refurbishment were previously used in other devices. 
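
Purely as an illustration of what such a passport record might hold, the sketch below defines a simple container with the kinds of fields described above (composition, origin, lifecycle, and recycling details). The field names are assumptions made for the example, not the NDPP specification’s actual schema.

# Hypothetical DPP-style record; field names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DigitalProductPassport:
    product_id: str
    manufacturer: str
    country_of_origin: str
    materials: dict[str, float] = field(default_factory=dict)  # material -> share by weight
    recycled_content_pct: float = 0.0
    recycling_instructions: str = ""
    previously_used_in: list[str] = field(default_factory=list)  # e.g. refurbished components

passport = DigitalProductPassport(
    product_id="unit-001",
    manufacturer="ExampleCorp",
    country_of_origin="DE",
    materials={"aluminium": 0.42, "glass": 0.18},
    recycled_content_pct=0.35,
)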

That said, this kind of unique device information does sound like the kind of data that could be abused for device fingerprinting and user tracking; is there a risk of this?

Age of consent

I contacted Mike McCamon, the organization’s executive director, for more background on NDPP. I was particularly curious about the NDPP specification — could it be abused for digital device fingerprinting? That’s unlikely, said McCamon, in part because of the nature of NFC design, which has been developed from day one to require active consent from the user.

“Security and privacy are foundational aspects of our work at the NFC Forum,” he said. “The NFC Digital Product Passport (NDPP) Specification can be thought more of a container of content than being fully descriptive of what content is included.” The support should extend use of NFC in different ways, such as in supply chain management, inventory control, or effective recycling strategies, all of which may benefit from the kind of information NDPP provides.

“And of course, even with our new extended range…, NFC Forum-capable products must be in the closest of proximity to be read. This is in addition to most NFC functionality today on mobile devices and wearables, which is only accessible following a direct user action – like a double-tap for instance. For these and the reasons above, we believe NFC Forum standards will provide the most capable, intuitive, and secure data carrier of DPP data for the market.”

For the rest of us

Millions of people use NFC every day for payments, car and hotel room keys, and even travel. That means the new NFC standard will deliver measurable benefits to consumers, because it should work better than it does now. And for enterprises, the extended support for multi-purpose taps should open up a variety of product and service development possibilities, particularly as Apple opens up access to NFC on its devices.

NFC Release 15 is currently available to high-level NFC Forum member companies, including Apple, Google, Sony, and Huawei, which can now implement the improvements in their own products ahead of a public release as new iPhones appear in the fall.

You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.

Source:: Computer World
