Tech demis

  1. Fantasy fears about AI are obscuring how we already abuse machine intelligence
  2. AI poses ‘risk of extinction’ on par with nukes, tech leaders say
  3. DeepMind CEO Demis Hassabis Urges Caution on AI
  4. Tech Experts, Leaders Warn Of AI Extinction Risk
  5. Google Investor Loses Rp3.35 Trillion in FTX



Fantasy fears about AI are obscuring how we already abuse machine intelligence

Last November, a young African American man, Randal Quran Reid, was pulled over by the state police in Georgia as he was driving into Atlanta. He was arrested under warrants issued by Louisiana police for two cases of theft in New Orleans. Reid had never been to Louisiana, let alone New Orleans. His protestations came to nothing.

It emerged that the arrest warrants had been based solely on a facial recognition match, though that was never mentioned in any police document; the warrants claimed “a credible source” had identified Reid as the culprit. The facial recognition match was incorrect, the case eventually fell apart, and Reid was released. He was lucky: he had the family and the resources to ferret out the truth. Millions of Americans would not have had such social and financial assets.

Reid’s case, and those of others like him, should be at the heart of one of the most urgent contemporary debates: that of artificial intelligence and the dangers it poses. That it is not, and that so few recognise it as significant, shows how warped the discussion of AI has become, and how much it needs resetting.

There has long been an undercurrent of fear about the kind of world AI might create. Recent developments have turbocharged that fear and inserted it into public discussion. The release last year of version 3.5 of ChatGPT was one such development. Then, two weeks ago, leading members of the tech community, including Sam Altman, the CEO of OpenAI, which makes ChatGPT, and Demis Hassabis, CEO ...

AI poses ‘risk of extinction’ on par with nukes, tech leaders say

Since then, a growing faction within the AI community has been warning about the potential risks of a doomsday-type scenario in which the technology grows sentient and attempts to destroy humans in some way. They are pitted against a second group of researchers who say this is a distraction from problems like inherent bias in current AI and the potential for it to take jobs.

Google CEO Sundar Pichai said in April that the pace of technological change may be too fast for society to adapt, but that he was optimistic because the conversation around AI risks was already happening. Microsoft CEO Satya Nadella has said that AI will be hugely beneficial by helping humans work more efficiently and allowing people to do more technical tasks with less training.

Industry leaders are also stepping up their engagement with Washington power brokers. Earlier this month, Altman met with President Biden to discuss AI regulation. He later testified on Capitol Hill, warning lawmakers that AI could cause significant harm to the world. Altman drew attention to specific “risky” applications, including using it to spread disinformation and potentially to aid in more targeted drone strikes.

Addressing the apparent hypocrisy of sounding the alarm over AI while rapidly working to advance it, Altman told Congress that it was better to get the technology out to many people now, while it is still early, so that society can understand and evaluate its risks, rather than to wait until it is already too powerful to control. Others have implied that the comp...

DeepMind CEO Demis Hassabis Urges Caution on AI

Demis Hassabis stands halfway up a spiral staircase, surveying the cathedral he built. Behind him, light glints off the rungs of a golden helix rising up through the staircase’s airy well. The DNA sculpture, spanning three floors, is the centerpiece of DeepMind’s recently opened London headquarters. It’s an artistic representation of the code embedded in the nucleus of nearly every cell in the human body. “Although we work on making machines smart, we wanted to keep humanity at the center of what we’re doing here,” Hassabis, DeepMind’s CEO and co-founder, tells TIME. This building, he says, is a “cathedral to knowledge.”

Each meeting room is named after a famous scientist or philosopher; we meet in the one dedicated to James Clerk Maxwell, the man who first theorized electromagnetic radiation. “I’ve always thought of DeepMind as an ode to intelligence,” Hassabis says.

Hassabis, 46, has always been obsessed with intelligence: what it is, the possibilities it unlocks, and how to acquire more of it. He was the second-best chess player in the world for his age when he was 12, and he graduated from high school a year early. As an adult he strikes a somewhat diminutive figure, but his intellectual presence fills the room. “I want to understand the big questions, the really big ones that you normally go into philosophy or physics if you’re interested in,” he says. “I thought building AI would be the fastest route to answer some of those questions.” DeepMind—a subsidiary of Google...

Tech Experts, Leaders Warn Of AI Extinction Risk

Leading figures in AI development and research have once again joined forces to raise the alarm about the risks posed by artificial intelligence. Championed by the Center for AI Safety — a San Francisco-based nonprofit that aims to counter the "societal-scale risks associated with AI" through responsible research and advocacy — the statement likens AI to pandemics and nuclear war.

Among the signatories are OpenAI CEO Sam Altman, the chief of Google's DeepMind unit Demis Hassabis, Stability AI head Emad Mostaque, Quora CEO Adam D'Angelo, and Microsoft CTO Kevin Scott. Notable omissions from the statement are Apple, Google, Meta, and Nvidia.

This isn't the first time that a collective voice has been raised against AI risks, but it is the first time that top AI executives like Altman have joined the chorus. Over the past few weeks, multiple notable voices have called for a regulatory oversight mechanism so that AI development happens responsibly and doesn't harm human prospects. However, the lingering fear is that AI could open the same kind of Pandora's box that the advent of social media and the internet did.

Google Investor Loses Rp3.35 Trillion in FTX

Jakarta, CNBC Indonesia - The head of one of the world's largest venture capital firms, Sequoia Capital, has apologized to its investors. The reason: the firm, an early backer of Google and Apple, suffered heavy losses because of the crypto exchange FTX.

According to Bloomberg, the apology was delivered by Roelof Botha, Sequoia's chief, in a meeting with investors on Tuesday. Alfred Lin, the partner who led the FTX funding deal, then gave investors an update on the situation, followed by Shaun Maguire, who explained the state of the crypto sector. Sequoia had previously been reported to have written off its investment in FTX.

FTX, founded by Sam Bankman-Fried, is now in bankruptcy proceedings. When Sequoia invested in July 2021, FTX was valued at US$18 billion. Two months later, its valuation jumped to US$25 billion. In 2022, FTX raised US$400 million in a Series C round, bringing its total funding to US$2 billion and its valuation to US$32 billion.

Sequoia took part in FTX's fundraising through two of its investment vehicles. Through Global Growth Fund III, it poured US$150 million into FTX; it then invested a further US$63.5 million through the SCGE Fund. In total, Sequoia put US$213.5 million into FTX, or roughly Rp3.35 trillion.
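The investment figures above can be sanity-checked with a few lines of arithmetic. Note that the USD/IDR exchange rate below is not stated in the article; it is merely the rate implied by the article's own Rp3.35 trillion conversion.

```python
# Sanity-check the Sequoia/FTX figures reported above.
global_growth_fund_iii = 150.0   # US$ millions, via Global Growth Fund III
scge_fund = 63.5                 # US$ millions, via the SCGE Fund

# Total commitment across both vehicles.
total_usd_millions = global_growth_fund_iii + scge_fund
print(total_usd_millions)        # 213.5, matching the reported US$213.5 million

# Exchange rate implied by the Rp3.35 trillion conversion
# (an inference, not a figure from the article).
implied_rate = 3.35e12 / (total_usd_millions * 1e6)
print(round(implied_rate))       # ~15,691 rupiah per dollar
```

The implied rate of roughly Rp15,700 per dollar is consistent with the two reported totals being the same sum expressed in different currencies.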
