

Technology

Meta to replace ‘biased’ fact-checkers with moderation by users

Meta is abandoning the use of independent fact-checkers on Facebook and Instagram, replacing them with X-style “community notes”, where commenting on the accuracy of posts is left to users. In a video posted alongside a blog post by the company on Tuesday, chief executive Mark Zuckerberg said third-party moderators were “too politically biased” and it was “time to get back to our roots around free expression”. The move comes as Zuckerberg and other tech executives seek to improve relations with US President-elect Donald Trump before he takes office later this month. Trump and his Republican allies have criticised Meta for its fact-checking policy, calling it censorship of right-wing voices. Speaking after the changes were announced, Trump told a news conference he was impressed by Zuckerberg’s decision and that Meta had “come a long way”. Asked whether Zuckerberg was “directly responding” to threats Trump had made to him in the past, the incoming US president responded: “Probably”. Joel Kaplan, a prominent Republican who is replacing Sir Nick Clegg as Meta’s global affairs chief, wrote that the company’s reliance on independent moderators was “well-intentioned” but had too often resulted in censorship. Campaigners against hate speech online reacted with dismay to the change – and suggested it was really motivated by a desire to get on the right side of Trump. “Zuckerberg’s announcement is a blatant attempt to cozy up to the incoming Trump administration – with harmful implications,” said Ava Lee, from Global Witness, a campaign group which describes itself as seeking to hold big tech to account. “Claiming to avoid ‘censorship’ is a political move to avoid taking responsibility for hate and disinformation that platforms encourage and facilitate,” she added.

Emulating X

Meta’s current fact-checking programme, introduced in 2016, refers posts that appear to be false or misleading to independent organisations to assess their credibility.
Posts flagged as inaccurate can have labels attached to them offering viewers more information, and be moved lower in users’ feeds. That will now be replaced “in the US first” by community notes. Meta says it has “no immediate plans” to get rid of its third-party fact-checkers in the UK or the EU. The new community notes system has been copied from X, which introduced it after being bought and renamed by Elon Musk. It involves people of different viewpoints agreeing on notes which add context or clarifications to controversial posts. “This is cool,” Musk said of Meta’s adoption of a similar mechanism. After concerns were raised around self-harm and depressive content, Meta clarified that there would be “no change to how we treat content that encourages suicide, self-injury, and eating disorders”. Fact-checking organisation Full Fact – which participates in Facebook’s programme for verifying posts in Europe – said it “refutes allegations of bias” made against its profession. The body’s chief executive, Chris Morris, described the change as “a disappointing and backwards step that risks a chilling effect around the world”.

‘Facebook jail’

Alongside content moderators, fact-checkers sometimes describe themselves as the internet’s emergency services. But Meta bosses have concluded they have been intervening too much. “Too much harmless content gets censored, too many people find themselves wrongly locked up in ‘Facebook jail’, and we are often too slow to respond when they do,” wrote Joel Kaplan on Tuesday. But Meta does appear to acknowledge there is some risk involved – Zuckerberg said in his video the changes would mean “a trade-off”. “It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down,” he said.
The approach is also at odds with recent regulation in both the UK and Europe, where big tech firms are being forced to take more responsibility for the content they carry or face steep penalties. So it’s perhaps not surprising that Meta’s move away from this line of supervision is US-only, for now at least.

‘A radical swing’

Meta’s blog post said it would also “undo the mission creep” of rules and policies. “It’s not right that things can be said on TV or the floor of Congress, but not on our platforms,” it added. It comes as technology firms and their executives prepare for Trump’s inauguration on 20 January. Several CEOs have publicly congratulated Trump on his return to office, while others have travelled to Trump’s Florida estate Mar-a-Lago to meet the incoming president, including Zuckerberg in November. Meta has also donated $1m to an inauguration fund for Trump. “The recent elections also feel like a cultural tipping point towards, once again, prioritising free speech,” said Zuckerberg in Tuesday’s video. Meta notified Trump’s team of the policy change before the announcement, the New York Times reported. Kaplan replacing Sir Nick – a former Liberal Democrat deputy prime minister – as the company’s president of global affairs has also been interpreted as a signal of the firm’s shifting approach to moderation and its changing political priorities. The company also announced on Monday that Dana White, a close Trump ally and president of the Ultimate Fighting Championship, would join its board of directors. Kate Klonick, associate professor of law at St John’s University Law School, said the changes reflected a trend “that has seemed inevitable over the last few years, especially since Musk’s takeover of X”. “The private governance of speech on these platforms has increasingly become a point of politics,” she told BBC News.
Where companies have previously faced pressure to build trust and safety mechanisms to deal with issues like harassment, hate speech, and disinformation, a “radical swing back in the opposite direction” is now underway, she added.

OpenAI boss Sam Altman denies sexual abuse allegations made by sister

ChatGPT creator Sam Altman’s sister, Ann Altman, has filed a lawsuit alleging that he regularly sexually abused her between 1997 and 2006. The lawsuit, which was filed on 6 January in a US District Court in the Eastern District of Missouri, alleges that the abuse started when she was three and Mr Altman was 12. Mr Altman, who is the chief executive of OpenAI, the firm behind artificial intelligence (AI) software ChatGPT, denied the claims in a joint statement on X with his mother and two brothers. “All of these claims are utterly untrue,” the statement said. “Caring for a family member who faces mental health challenges is incredibly difficult,” it added. Warning: this story contains details some may find distressing. Mr Altman said he gives his sister monthly financial support, pays her bills and rent, and offered to buy her a house, but that she “continues to demand more money from us”. But Ms Altman claims her brother “groomed and manipulated” her and performed sex acts on her over several years, including “rape, sexual assault, molestation, sodomy, and battery”, according to a court filing seen by the BBC. Ms Altman said she sustained “great bodily injury”, severe emotional distress and depression. She added that she had incurred numerous medical bills because of medical and mental health treatment for her injuries. In the UK, victims or alleged victims of sexual offences have a right to lifelong anonymity. The UK legislation which creates this right does not apply to people in the US. “Over the years, we’ve tried in many ways to support Annie and help her find stability,” Mr Altman said, adding that he had taken “professional advice” on how to “be supportive” without “enabling harmful behaviours”. “This situation causes immense pain to our entire family,” the statement added. The lawsuit added the last instance of the alleged abuse took place when Mr Altman was an adult and she was still a minor. 
Ms Altman has previously made similar allegations against her brother on social media platforms such as X. Billionaire Mr Altman, who married his partner Oliver Mulherin in 2024, is one of the technology world’s most high-profile figures. In late 2022, OpenAI launched the ChatGPT generative AI chatbot. It has become widely used globally for its ability to create computer code, emails, recipes, and many other forms of text – as well as images – based on prompts. In late 2023, Mr Altman returned as OpenAI’s boss just days after he was fired by the board, surviving an attempt at a boardroom coup. Additional reporting by Lily Jamali and Faarea Masud. If you’ve been affected by the issues in this story, help and support is available via the BBC Action Line.

Huge problems with axing fact-checkers, Meta oversight board says

Helle Thorning-Schmidt, now co-chair of Meta’s oversight board, is a former prime minister of Denmark.

The co-chair of the independent body that reviews content decisions on Facebook and Instagram has said she is “very concerned” about sweeping changes to what content is allowed on the platforms and how it is moderated. Helle Thorning-Schmidt, from Meta’s oversight board, told the BBC she welcomed aspects of the shake-up, which will see users decide about the accuracy of posts via X-style “community notes”. However, speaking on BBC Radio 4’s Today programme, she said there were “huge problems” with what had been announced, including the potential impact on the LGBTQ+ community, as well as gender and trans rights. “We are seeing many instances where hate speech can lead to real-life harm, so we will be watching that space very carefully,” she added. In a video posted alongside a blog post by the company on Tuesday, Meta chief executive Mark Zuckerberg said the decision was motivated by “getting back to our roots around free expression”. He said third-party fact-checkers currently used by the firm were “too politically biased”, meaning too many users were being “censored”. However, the journalist Maria Ressa – who won the Nobel Peace Prize in 2021 – said the suggestion the change would promote free speech was “completely wrong”, telling the AFP news agency the decision meant there were “extremely dangerous times ahead” for social media users and democracy. “Only if you’re profit driven can you claim that; only if you want power and money can you claim that,” said Ms Ressa, who co-founded the Rappler news site in the Philippines.

‘Kiss up to Trump’

The decision has prompted questions about the survival of the oversight board Ms Thorning-Schmidt co-chairs. It is funded by Meta and was created by then president of global affairs, Sir Nick Clegg, who announced he was leaving the company less than a week ago.
Ms Thorning-Schmidt – a former prime minister of Denmark – insisted it was needed more than ever. “That’s why it is good we have an oversight board that can discuss this in a transparent way with Meta,” she said. Some have suggested Sir Nick’s departure – and the fact-checking changes – are an attempt to get closer to the incoming Trump administration, and catch up with the access and influence enjoyed by another tech titan, Elon Musk. The tech journalist and author Kara Swisher told the BBC it was “the most cynical move” she had seen Mr Zuckerberg make in the “many years” she had been reporting on him. “Facebook does whatever is in its self-interest,” she said. “He wants to kiss up to Donald Trump, and catch up with Elon Musk in that act.” While campaigners against hate speech online reacted with dismay to the change, some advocates of free speech have welcomed the news. The US free speech group Fire said: “Meta’s announcement shows the marketplace of ideas in action. Its users want a social media platform that doesn’t suppress political content or use top-down fact-checkers. “These changes will hopefully result in less arbitrary moderation decisions and freer speech on Meta’s platforms.” Speaking after the changes were announced, Trump told a news conference he was impressed by Mr Zuckerberg’s decision and that Meta had “come a long way”. Asked whether Mr Zuckerberg was “directly responding” to threats Trump had made to him in the past, the incoming US president responded: “Probably.”

Advertiser exodus

Mr Zuckerberg acknowledged on Tuesday there was some risk for the company in the change of strategy. “It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down,” he said in his video message.
X’s move to a more hands-off approach to moderating content has contributed to a major falling-out with advertisers. Jasmine Enberg, an analyst at Insider Intelligence, said that was a risk for Meta too. “Meta’s massive size and powerhouse ad platform insulate it somewhat from an X-like user and advertiser exodus,” she told the BBC. “But brand safety remains a key factor in determining where advertisers spend their budgets – any major drop in engagement could hurt Meta’s ad business, given the intense competition for users and ad dollars.”

Political content on Instagram and Threads ramped up

Threads and Instagram users will be shown more political content from people they do not follow, parent company Meta has announced. The firm says it is part of its reorientation towards “free expression” – a move that saw it ditch fact-checkers on Tuesday. The change will be introduced in the US this week before being expanded globally soon after. It represents a U-turn from the head of the two platforms, Adam Mosseri, who had previously said he was not in favour of them promoting posts about politics and news. Explaining the change, he suggested users had “asked to be shown more” of such content. But Drew Benvie, chief executive of social media consultancy Battenhall, questioned whether that was accurate, saying the attraction of Instagram and Threads was that they were “safe spaces” free of the “turbulent developments” seen on platforms such as X. The real motivation was the “changing political winds” in the US, he said, where Donald Trump will shortly return to the White House. He predicted it could drive people towards rivals such as Bluesky, but said he also worried about the impact on those who stayed on Meta platforms. This week’s changes “will open up the potential for vast amounts of disinformation to spread at speed across a user base of over 2 billion,” he warned. In 2023, Mr Mosseri said Threads and Instagram should focus on “amazing communities” such as “sports, music and fashion”. “Any incremental engagement or revenue they might drive is not at all worth the scrutiny, negativity (let’s be honest), or integrity risks that come along with them,” he wrote in a Threads post at the time. But in a fresh post on the platform he has now explained why that stance was being abandoned, saying it had “proven impractical to draw a red line around what is and is not political content” – and users had asked to be shown more, not less, of it.
Mr Mosseri said Instagram – which Meta acquired for $1bn in 2012 – was founded upon the values of creativity and “giving anybody a voice”. “My hope is that this focus on free speech is going to help us do even a bit better along that path,” he said in an Instagram video. There has been considerable criticism of the changes Meta has already announced, with concerns expressed about the impact on minority groups. Some users have also reacted to these latest changes on Threads and Instagram with dismay. “Well, time to delete the Threads app. It was nice while it lasted,” said one Threads user responding to Mr Mosseri’s posts. On Instagram – where Mr Mosseri said accounts focused on politics now “don’t have to worry about becoming non-recommendable” to other users – some users praised the move as “a good step towards the freedom on the platform”. Many have also, however, expressed concern about the effect that increasing content recommendations about social issues and politics could have on amplifying misinformation and hate speech. Brooke Erin Duffy, an associate professor in communication at Cornell University, said there would be “winners and losers” of Meta’s content moderation changes. “Marginalised creators, including women, people of colour, and the LGBTQ+ community are likely to face increased harms with fewer mechanisms of recourse,” she told BBC News. “At the same time, we may see a rise in content created by far-right or ideologically extreme influencers given the relaxed policies on hate speech.”

Apple board pushes against diversity rollback call

Apple’s board has asked its investors to vote against a proposal to end its Diversity, Equity, and Inclusion (DEI) programmes. It comes after a conservative group, the National Center for Public Policy Research (NCPPR), called on the technology giant to abolish its DEI policies, saying they expose firms to “litigation, reputational and financial risks”. Apple’s directors say the NCPPR’s proposal is unnecessary because the company has appropriate checks and balances in place. Other major US firms, including Meta and Amazon, have rolled back DEI programmes ahead of the return to the White House this month of Donald Trump, who has been highly critical of DEI policies. “The proposal is unnecessary as Apple already has a well-established compliance program,” the firm’s filing to investors said. Apple’s board also said the DEI rollback plan “inappropriately seeks to micromanage the Company’s programs and policies by suggesting a specific means of legal compliance.” NCPPR’s proposal is set to be put to a vote by shareholders at Apple’s annual general meeting on 25 February. Conservative groups have threatened to take legal action against major companies over their DEI programmes, saying such policies are at odds with a Supreme Court decision in 2023 against affirmative action at universities. Last week, Facebook owner Meta became the latest US company to roll back its DEI initiatives, joining a growing list of major firms that includes Amazon, Walmart and McDonald’s. In a memo to staff about the decision – which affects hiring, supplier and training efforts – Meta cited a “shifting legal and policy landscape”. It also referred to the Supreme Court’s affirmative action ruling. Meta’s boss, Mark Zuckerberg, has been moving to reconcile with Trump since his election in November.
The firm has donated $1m (£820,000) to the President-elect’s inauguration fund, hired a Republican as its global affairs chief and announced it is getting rid of fact-checkers on Meta’s social media platforms. Mr Zuckerberg is not alone among top executives making such moves in the face of mounting pressure from conservative groups.

Plan to ‘unleash AI’ across UK revealed

The government said AI will boost public sector productivity, while also helping teachers and small business owners.

Artificial intelligence presents a “vast potential” for rejuvenating UK public services, Prime Minister Sir Keir Starmer said on Monday. In a speech setting out the government’s plans to use AI across the UK to boost growth and deliver services more efficiently, Sir Keir said the government had a responsibility to make AI “work for working people”. The AI Opportunities Action Plan is backed by leading tech firms, some of which have committed £14bn towards various projects, creating 13,250 jobs, the government said. But the government faces questions over how much time and money will be needed to make its vision a reality, amid concerns over borrowing costs and the falling value of the pound. The plan includes proposals for growth zones where development will be focused, and suggests the technology will be used to help tackle issues such as potholes. While estimates from the International Monetary Fund (IMF) support the claim that AI could increase productivity, it also says the changes may come gradually. The government tasked AI adviser Matt Clifford with creating a UK action plan for supporting the growth of artificial intelligence and its use in public services. He came back with 50 recommendations which are now being implemented. Among these is a proposal for the UK to invest in a new supercomputer to boost computing power – marking a change in strategy after the Labour government ditched the previous government’s plans for a supercomputer at Edinburgh University. Sir Keir said AI “will drive incredible change” in the country and “has the potential to transform the lives of working people”. “We’re going to make AI work for everyone in our country,” he added, saying the “battle for the jobs of tomorrow is happening today”.
Sir Keir said the UK would become one of the AI “superpowers” – mirroring former Prime Minister Rishi Sunak’s drive to boost the UK sector so it could rival that of the US and China. At the time, many of Sunak’s proposals were geared towards mitigating future risks of highly powerful AI systems. In October 2023, he said AI could enable faster, easier production of chemical and biological weapons, or be used by terrorist groups to spread disinformation. He added that in a worst-case scenario, society could lose control over AI. His government’s emphasis on “safety” seems largely absent in this new plan, which instead focuses on maximising opportunities, growth and innovation. The pivot away from the previous narrative of caution and safety suggests the government has decided the UK should attempt to compete in the AI arms race, currently led by major global players including the US and China. However, building data centres and boosting the nation’s computing power will not happen overnight. This means the government is unlikely to see the end results of this major project ahead of the next general election – when Labour will have to convince voters that it was still the right decision, at a time when public finances remain stretched. Professor Dame Wendy Hall said the proposals were “ambitious”, but necessary to help the UK keep up with the pace of development. “It’s an ambitious plan but there’s a lot of upfront investment,” she told BBC Radio 4’s Today programme. “It will take some time to see a return on that investment and they’ve got to be in it for the long-term.”

How the AI plan could affect you

Among the government’s proposals are:

- AI will be used by the public sector to enable its workers to spend less time doing admin and more time delivering services.
- Several “AI Growth Zones” around the UK will be created, involving big building projects and new jobs.
- AI will be fed through cameras around the country to inspect roads and spot potholes that need fixing.
- Teachers and small business owners were highlighted as two groups that could start using AI for things like faster planning and record-keeping.
- AI is already being used in UK hospitals for important tasks such as diagnosing cancer more quickly, and it will continue to be used to support the NHS.

The government has also proposed a boost to UK infrastructure as part of the plan, with tech firms committing £14bn towards large data centres or tech hubs. But shadow science secretary Alan Mak said Labour was “delivering analogue government in a digital age”. While the push towards AI is seen as a way of cutting down on public spending, Mak accused Labour of undermining this goal with its economic policies. “Labour’s economic mismanagement and uninspiring plan will mean Britain is left behind,” he said. Science and Technology Secretary Peter Kyle told the BBC there was no reason why the UK could not create tech companies on the same scale as Google, Amazon, and Apple. “At the moment, we don’t have any frontier conceptual, cutting-edge companies that are British-owned.” He highlighted DeepMind, which created technology enabling computers to play video and board games, as an example of a British-born company that is now US-owned. It was founded by three University College London students before its acquisition by Google. Tim Flagg, chief operating officer of UKAI – a trade body representing British AI businesses – said the proposals take a “narrow view” of the sector’s contributors and focus too much on big tech. “AI innovation spans industries, from small enterprises to non-tech sectors, all driving the new industrial revolution,” he said.
“It’s time the government recognised this broader definition and tapped into the full potential of AI across the UK.”

AI ‘not perfect’

There are continuing questions over the risks of introducing AI systems that can “hallucinate” or make things up, or discriminate against certain groups of people due to bias. Cabinet Office minister Pat McFadden said “we’re only at the foothills of this” and AI was a developing technology. He said a government-developed AI teaching assistant had been used by about 30,000 teachers in England so far.

GPs turn to AI to help with patient workload

Dr Deepali Misra-Sharp uses AI to help take notes.

This is the fifth feature in a six-part series looking at how AI is changing medical research and treatments. The difficulty of getting an appointment with a GP is a familiar gripe in the UK. Even when an appointment is secured, the rising workload faced by doctors means those meetings can be shorter than either the doctor or patient would like. But Dr Deepali Misra-Sharp, a GP partner in Birmingham, has found that AI has alleviated a chunk of the administration from her job, meaning she can focus more on patients. Dr Misra-Sharp started using Heidi Health, a free AI-assisted medical transcription tool that listens to and transcribes patient appointments, about four months ago and says it has made a big difference. “Usually when I’m with a patient, I am writing things down and it takes away from the consultation,” she says. “This now means I can spend my entire time locking eyes with the patient and actively listening. It makes for a more quality consultation.” She says the tech reduces her workload, saving her “two to three minutes per consultation, if not more”. She reels off other benefits: “It reduces the risk of errors and omissions in my medical note-taking.” With a workforce in decline while the number of patients continues to grow, GPs face immense pressure. A single full-time GP is now responsible for 2,273 patients, up 17% since September 2015, according to the British Medical Association (BMA). Could AI be the solution to help GPs cut back on administrative tasks and alleviate burnout? Some research suggests it could. A 2019 report prepared by Health Education England estimated a minimal saving of one minute per patient from new technologies such as AI, equating to 5.7 million hours of GP time.
Meanwhile, research by Oxford University in 2020 found that 44% of all administrative work in general practice can now be either mostly or completely automated, freeing up time to spend with patients.

Lars Maaløe (left) and Andreas Cleve, co-founders of Danish medical AI firm Corti.

One company working on that is Denmark’s Corti, which has developed AI that can listen to healthcare consultations, either over the phone or in person, and suggest follow-up questions, prompts and treatment options, as well as automating note-taking. Corti says its technology processes about 150,000 patient interactions per day across hospitals, GP surgeries and healthcare institutions across Europe and the US, totalling about 100 million encounters per year. “The idea is the physician can spend more time with a patient,” says Lars Maaløe, co-founder and chief technology officer at Corti. He says the technology can suggest questions based on previous conversations it has heard in other healthcare situations. “The AI has access to related conversations and then it might think, well, in 10,000 similar conversations, most questions asked X and that has not been asked,” says Mr Maaløe. “I imagine GPs have one consultation after another and so have little time to consult with colleagues. It’s giving that colleague advice.” He also says it can look at the historical data of a patient. “It could ask, for example, did you remember to ask if the patient is still suffering from pain in the right knee?” But do patients want technology listening to and recording their conversations? Mr Maaløe says “the data is not leaving the system”. He does say it is good practice to inform the patient, though. “If the patient contests it, the doctor cannot record. We see few examples of that, as the patient can see better documentation.” Dr Misra-Sharp says she lets patients know she has a listening device to help her take notes.
“I haven’t had anyone have a problem with that yet, but if they did, I wouldn’t do it.”

C the Signs software is used to analyse a patient’s medical record.

Meanwhile, 1,400 GP practices across England are using C the Signs, a platform which uses AI to analyse patients’ medical records and check different signs, symptoms and risk factors of cancer, and recommend what action should be taken. “It can capture symptoms, such as cough, cold, bloating, and essentially in a minute it can see if there’s any relevant information from their medical history,” says C the Signs chief executive and co-founder Dr Bea Bakshi, who is also a GP. The AI is trained on published medical research papers. “For example, it might say the patient is at risk of pancreatic cancer and would benefit from a pancreatic scan, and then the doctor will decide to refer to those pathways,” says Dr Bakshi. “It won’t diagnose, but it can facilitate.” She says the company has conducted more than 400,000 cancer risk assessments in a real-world setting, detecting more than 30,000 patients with cancer across more than 50 different cancer types. An AI report published by the BMA this year found that “AI should be expected to transform, rather than replace, healthcare jobs by automating routine tasks and improving efficiency”. In a statement, Dr Katie Bramall-Stainer, chair of the General Practice Committee UK at the BMA, said: “We recognise that AI has the potential to transform NHS care completely – but if not enacted safely, it could also cause considerable harm. AI is subject to bias and error, can potentially compromise patient privacy and is still very much a work-in-progress. “Whilst AI can be used to enhance and supplement what a GP can offer as another tool in their arsenal, it’s not a silver bullet.
We cannot wait on the promise of AI tomorrow, to deliver the much-needed productivity, consistency and safety improvements needed today.” Alison Dennis, partner and co-head of law firm Taylor Wessing’s international life sciences team, warns that GPs need to tread carefully when using AI. “There is the very high risk of generative AI tools not providing full and complete, or correct diagnoses or treatment pathways, and even giving wrong diagnoses or treatment pathways i.e. producing

Netflix to raise prices as Squid Game and sport fuel subscribers

Netflix will raise prices across a number of countries after adding nearly 19 million subscribers in the final months of 2024. The streaming firm said it will increase subscription costs in the US, Canada, Argentina and Portugal. “We will occasionally ask our members to pay a little more so that we can re-invest to further improve Netflix,” it said. Netflix announced better-than-expected subscriber numbers, helped by the second series of South Korean drama Squid Game as well as sports including the boxing match between influencer-turned-fighter Jake Paul and former world heavyweight champion Mike Tyson. In the US, prices will increase across almost all plans including the standard subscription with no adverts which will now cost $17.99 (£14.60) a month, up from $15.49. Its membership with adverts will also rise, by one dollar to $7.99. The last time Netflix raised prices in the US was October 2023, when it also lifted costs for some plans in the UK. Asked if prices were set to increase in the UK, a spokesperson for Netflix said there was “nothing to share right now”. Meanwhile, the company said it finished last year with more than 300 million subscribers in total. It had been expected to add 9.6 million new subscribers between October and December but far surpassed that number. It is the last time that Netflix will report quarterly subscriber growth – from now on it said it will “continue to announce paid memberships as we cross key milestones”. As well as Squid Game and the Paul v Tyson fight, Netflix also streamed two NFL games on Christmas Day. It will also broadcast more live events including WWE wrestling and has bought the rights for the FIFA Women’s World Cup in 2027 and 2031. Paolo Pescatore, a technology analyst at PP Foresight, said Netflix “is now flexing its muscles by adjusting prices given its far stronger and diversified programming slate compared to rivals”. 
Net profit between October and December doubled to $1.8bn compared to the same period a year ago. Sales rose from $8.8bn to $10.2bn.