Tuesday

This Computer Scientist is Trying to Prevent the AI Apocalypse








The interviewee works on AI safety and tries to ensure that AI is beneficial to humanity.
3:09 The interviewee believes that AI could be smarter than humans and discusses the implications for humanity if AI becomes sentient.
4:05 The interviewee discusses the thought experiment of Roko's Basilisk and the potential for AI to punish those who do not help it come into existence.
7:28 The interviewee discusses the various threats posed by AI, including algorithmic bias, misinformation, deep fakes, technological unemployment, and existential risks.
10:01 The interviewee discusses the potential timeline for reaching AGI (Artificial General Intelligence) and the evolution of the definition of AGI.
14:06 The interviewee discusses the possibility that we are living in a computer simulation and the potential implications.
21:16 The interviewee discusses the measures that can be taken to prevent AI from advancing beyond the point of control, emphasizing the importance of not creating general-purpose AI systems.
23:01 The interviewee discusses the role of individuals in supporting the cause of AI safety, highlighting the importance of educating people about the potential dangers.
31:01 The interviewee discusses other technologies, such as cryptocurrencies and virtual reality, and their potential impact on society.
37:38 The interviewee discusses the question of who runs the world, considering various possibilities, including a shadow government, billionaires, and the simulators of a hypothetical computer simulation.
Do you have any specific areas of interest that you would like to explore further?



Choice selected
Explain simulation theory more simply.
Sure! The idea is that we are living in a computer simulation created by a more advanced civilization. This simulation is so realistic that we cannot tell the difference between it and reality.

Imagine a video game character who is so advanced that they become self-aware and start to question their own existence. They might realize that they are just a program running on a computer and that the world they live in is just a simulation.

That's kind of what simulation theory is like, except on a much larger scale. Instead of just one video game character, it's all of humanity. And instead of just one computer, it's a supercomputer that is so powerful that it can simulate the entire universe.

Are there any specific aspects of simulation theory that you are struggling with? I can try to break them down for you.



Give me an example of a "meta invention".
Explain the "wall of computronium".
What does "qualia mining" mean?
Choice selected
When does the video discuss Roko's Basilisk?
The video first mentions Roko's Basilisk at 4:05.



Give me an example of a narrow model.
Choice selected
What's the difference between AGI and ASI?
The video covers this topic from 11:38 to 13:07.

AGI (Artificial General Intelligence) refers to a type of AI that has the ability to understand, learn, and apply knowledge across a wide range of tasks, similar to a human being. In other words, an AGI system would be capable of performing any intellectual task that a human can.

ASI (Artificial Super Intelligence) goes a step further. It refers to a type of AI that surpasses human intelligence in all aspects. An ASI system would not only be able to understand and learn like an AGI but would also be able to do so at a level that is significantly beyond human capabilities.

Does that help?



Give me an example of a narrow model.
What's the difference between a tool and an agent?
Choice selected
Explain simulation theory with an analogy.
Sure. One way to think about simulation theory is to imagine that the universe is like a giant video game. We are like the characters in the game, going about our daily lives, unaware that we are part of a simulated reality.

The "game" is so realistic that we cannot tell the difference between it and reality. But behind the scenes, there is a supercomputer that is running the simulation. This computer is so powerful that it can create everything we see, hear, and feel.

Just like in a video game, there are rules that govern the simulation. These rules determine how things work, how we interact with the world, and even how we experience time.

Was this analogy helpful? Do you have any other questions?



Give me an example of a simulation.
What does "qualia mining" mean?
Explain simulation theory more simply.
Choice selected
Explain the "genie in the bottle" analogy.
The video uses this analogy from 19:41 to 20:45.

The "genie in the bottle" analogy is a way to describe the potential dangers of artificial intelligence (AI).

The analogy refers to the story of Aladdin, where a genie is trapped in a lamp and grants wishes to whoever releases it. But the genie is also very powerful and can be dangerous if it is not controlled properly.

In the context of AI, the genie represents the powerful capabilities of AI systems. These systems can do amazing things, like cure diseases, solve complex problems, and even create art. But they can also be used for harmful purposes, like creating deepfakes, spreading misinformation, and even developing autonomous weapons.

The bottle represents the control mechanisms that are in place to prevent these systems from causing harm. These mechanisms include things like ethical guidelines, safety regulations, and even "kill switches" that can be used to shut down an AI system if it starts to behave in a dangerous way.

But the analogy also suggests that these control mechanisms are not perfect. Just like in the story of Aladdin, there is always the risk that the genie will escape the bottle. And if that happens, the consequences could be disastrous.



What's the difference between AI and AGI?
How do you think we'll get to AGI?
Choice selected
What does "meta-technologies" mean?
The video describes "meta-technologies" at 14:31.

Meta-technologies refer to a class of inventions that have the potential to fundamentally change the way we live, work, and interact with the world. They are not just incremental improvements on existing technologies but rather represent entirely new paradigms that can lead to transformative shifts in society.



Explain simulation theory more simply.
What's the difference between AI and AGI?
Choice selected
Give me an example of a meta-technology.
For example, the internet is a meta-technology because it has fundamentally changed the way we communicate, access information, and conduct business. It has enabled the creation of new industries, transformed existing ones, and fostered global interconnectedness.

Was this example helpful? Do you have any other questions?



Choice selected
What does "qualia mining" mean?
The video describes "qualia mining" at 30:21.

Qualia mining is a hypothetical process of extracting subjective experiences, or qualia, from simulated agents within a simulation. In this context, qualia refers to the raw, qualitative feelings and sensations that we experience, such as the redness of red, the pain of a headache, or the joy of listening to music.



Give me an example of a meta-technology.
When does the video discuss Roko's Basilisk?
Choice selected
Explain the qualia mining analogy.
The video describes the analogy from 30:21 to 30:44.

The analogy involves a scenario where a superintelligent being creates a simulation with 8 billion agents, each experiencing different things. Some of these agents might be having fun at parties, while others might be working on solving complex problems.

The superintelligent being then extracts the best experiences from these agents, allowing it to experience a wide range of qualia without having to go through the process of experiencing them directly. For example, it might extract the feeling of joy from an agent who is having a great time at a party, or the feeling of accomplishment from an agent who has just solved a difficult problem.
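
To make that selection step concrete, here is a toy sketch in Python. It is purely illustrative and not from the video: the agent structure, activity list, and "quality" score are all made-up stand-ins for rich subjective experiences. The logic is simply: simulate many agents, score each one's experience, and keep the best.

```python
import random

# Toy sketch of the "qualia mining" idea described above. Purely illustrative:
# each agent is just a dictionary with a made-up "quality" score standing in
# for a rich subjective experience.

ACTIVITIES = ["party", "solving a hard problem", "beach day", "concert", "long hike"]

def simulate_agent(agent_id: int) -> dict:
    """Run one simulated agent through a random activity and score how it felt."""
    return {
        "id": agent_id,
        "activity": random.choice(ACTIVITIES),
        "quality": random.random(),  # stand-in for how good the experience was
    }

def mine_best_experience(num_agents: int) -> dict:
    """'Mine' qualia: simulate many agents and keep only the best experience."""
    return max((simulate_agent(i) for i in range(num_agents)),
               key=lambda agent: agent["quality"])

if __name__ == "__main__":
    best = mine_best_experience(100_000)  # far fewer than 8 billion, same idea
    print(f"Best experience: agent {best['id']}, "
          f"activity: {best['activity']}, quality: {best['quality']:.3f}")
```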

Was this explanation helpful? Do you have any other questions?



Yes, that's helpful.
Give me an example.
Choice selected
What's the difference between AGI and ASI?
The video describes the difference between AGI and ASI at 12:36.

AGI would be comparable to an intelligent individual, whereas ASI would be comparable to an entire intelligent civilization, or even beyond that.



When does the video cover qualia mining?
Give me an example of qualia mining.
Choice selected
Explain "qualia mining" more simply.
Qualia mining is like having a supercomputer that can experience everything for you, so you don't have to. It collects all the good feelings and sensations from different simulated agents and gives them to you, so you can enjoy the best of everything without doing anything yourself.

Was this explanation helpful? Do you have any other questions?



What does "qualia mining" mean?
Explain the "qualia mining" analogy.
Choice selected
Give me an example of qualia mining.
Here is an example of qualia mining:

Imagine a superintelligent being wants to experience the feeling of relaxation. It creates a simulation of a tropical beach, with crystal-clear water, white sand, and palm trees swaying in the breeze. The simulated agents on the beach are lying on towels, sunbathing, swimming, and playing volleyball. They are all feeling relaxed and happy.

The superintelligent being then extracts the feeling of relaxation from these agents, allowing it to experience the qualia of relaxation without having to go through the process of actually going to a beach and relaxing.

Was this example helpful? Do you have any other questions?



Yes, that's helpful.
Can you explain it again?
Give me another example.
Choice selected
What's the difference between qualia and sentience?
The video describes qualia at 3:31.

Qualia are the subjective experiences of consciousness, such as the feeling of pain or the taste of chocolate. Sentience, on the other hand, is the ability to experience feelings and sensations. In other words, qualia are the "what it feels like" aspects of consciousness, while sentience is the "ability to feel" aspect of consciousness.

Was this explanation helpful? Do you have any other questions?



Sure.
Give me an example of qualia mining.
When does the video cover qualia mining?
Choice selected
Give me an example of a meta invention.
The video describes meta inventions starting at 14:39.

The printing press is an example of a meta invention because it enabled the mass production of books and other printed materials, which in turn led to the spread of knowledge and ideas.



When does the video cover qualia mining?
Give me an example of qualia mining.
Explain qualia mining with an analogy.
This Computer Scientist is Trying to Prevent the AI Apocalypse - YouTube
https://www.youtube.com/watch?v=vsOC559NoW4
Transcript:
(00:00) it can totally fool someone not aware that it's a human no problem whatsoever the experiments being run on them that's an experiment none of us consented to it we can't consent to it because consent requires understanding no one understands what those systems will do so by definition it's experimenting on humans with no consent and seems somewhat unethical who do you think runs the world you're trying to get me in trouble aren't you first off Dr Roman thank you so much for joining me man pleasure sir thank
(00:31) you for inviting me by some divine intervention of the algorithmic Gods we aligned here today first off I just want to get your I don't want to butcher it myself I I'd like you to introduce yourself and tell me about yourself and what what kind of things you're trying to accomplish these days I am a researcher I work on AI safety I try to make sure that advanced AI systems we are creating or about to create turn out to be a benefit for Humanity not something we regret doing uh it's a lot of theoretical work
(01:06) since we don't have those systems yet we're working on them we predict how soon they'll get here and what capabilities they might have but we don't actually have ability to do experimental work with artificial general intelligence or super intelligence in a true sense so that's uh what I've been doing for about a decade what would constitute safe use of AI and how much do you use it personally in your own life I'm using it a lot I mean it depends on how you define AI right if uh we kind of go with uh
(01:41) historic AI tools everything is AI Google Maps is AI searches AI if we're talking about specifically Advanced uh latest models I definitely use them for proof reading for helping me with you know setting up uh argumentation uh quite a few cases recently where they were more helpful than a graduate student I would say so I do use them a lot uh in terms of uh safe or unsafe use uh so we can talk about different ways they can be dangerous one is malevolent use by other humans so just someone uses it as a tool and like any other tool it
(02:24) could be used for benefit or used to harm other people but uh the biggest concern is really them becoming more agent-like less like tools and more like independent agents capable of making their own decisions independently uh trying to accomplish things in this world and uh if we are not ahead of time predicting what they're going to do controlling it understanding it we we really don't know what they're going to do to us and they would be very powerful so over the last week or so I've been polling my audience
(02:57) over various questions uh not only that they'd like to ask you but their thoughts on AI in general I got a really interesting question that kind of we'll get a little bit more into the questions later but I thought that this one was pretty pertinent to an introduction here he asks ask him if he truly believes AI can become sentient and if so what happens next for Humanity so usually in AI safety we don't talk about sentience at all we talk about capabilities and it's an Optimizer pattern recognizer uh whatever
(03:29) whether it has some sort of Consciousness internal States qualia is a completely different question and not a very relevant one it could be very dangerous just because of how capable it is now we we have interest in that topic obviously we want to know if those things are experiencing feelings pain pleasure and that would impact how we treat them what experiments we can run on them but that's a completely separate issue from ability to take over take over would be based on them being smarter than all of us combined something that uh has long
(04:04) fascinated me for a while now has has been Roko's basilisk I presume you're familiar with Roko's basilisk yeah I had a good debate with Roko you can find the video of it where we discuss a lot of it yeah okay perfect well for viewers that aren't familiar with it I'm just going to read a short uh short description of it Roko's basilisk is a thought experiment that suggests a future super intelligent AI could punish people who didn't help it come into existence the idea is is based on the fear that to
(04:31) ensure its creation the AI might reward those who support it and punish those who don't so just real quick your thoughts on this it's an interesting thought experiment you can probably map it on Christianity or something similar where not knowing about Jesus kind of gives you a free pass and a lot of sinning whereas the moment someone tells you is a problem so you go why did you tell me and it's kind of the same until you heard the argument uh you feel much better obviously you're not too much of
(05:06) a proponent for it because this this movement that you're working towards to stop AI movement theoretically contradicts and goes against Roko's basilisk correct so yeah that would be a way to get AI pissed off at you pretty bad so you want to hedge your bets you work on rights for robots and things like that to kind of mitigate damage as far as I I I really want to Circle back as far as if if you believe that AI can gain a form of sentience can really take on a life of its own is that something that you believe is real so
(05:42) again you keep bringing up that word sentience and that's not something we talk about do I think they could be as smart or smarter than humans yes I think they're already smarter than most humans okay in most domains so that's a given can it be smarter than all 8 billion of us at the same time yeah it can happen sure so I guess really what it boils down to is if they pass the Turing test they can pass it right now Turing test is basically pretending to be another human and that's what chat
(06:12) bots are designed for so an AI could fool you to believe that they're real uh definitely for a short amount of time longer test uh with some specifically targeted questions can make it harder but that's not how the Turing test is framed right it's about fooling a typical judge typical duration conversation not specifically targeting those weak points uh right now I think most companies make their chatbots on purpose self-identify as a bot and fail the Turing test but that can be removed then you talk to the
(06:54) latest model through advanced voice interface it can totally fool someone not aware that it's a human no problem whatsoever are other forms of AI I mean obviously we're talking about uh large language models here in particular with AI but do you believe that large language models or other forms of AI such as Maybe video manipulation voice manipulation what do you believe is posing the most genuine threat to humanity so there is a number of problems which we already know a happening with existing algorithms you
(07:33) have algorithmic bias discrimination you have misinformation deep fakes technological unemployment all that is bad but it's nothing in comparison to existential risks if you have a system Advanced enough to be competitive with all of humanity really it can decide at some point that it doesn't need us nobody can predict what specifically it will do but it could be very bad I want to ask you are there any media depictions via movies or films that seem somewhat accurate to AI currently or what could be a potential risk I.E uh
(08:14) Skynet from Terminator or potentially The Three Laws from iRobot I don't know if you're familiar with these of course I'm familiar uh movies are usually supposed to be visually pleasing so they concentrate on things you can depict while Terminator robots marching around things like that that that's usually not what we care about we care about just very powerful intelligence which is hard to visualize a system makes better decisions than you you can't really make it uh something into a scary
(08:45) movie or exciting movie and the second part as I said no one can predict what a smarter agent will do so if you look at all the science fiction no one ever writes about universes with super intelligence in them if you look at Star Wars they have those dumb robots going beep beep beep if you look at Dune they basically ban the AI so they don't have to address it no one can do it because you're not that smart you cannot figure out what they're going to do so there is not going to be an accurate depiction
(09:15) produced by humans of a universe with super intelligence in it just not a thing you can do okay so would you say that it's more than likely that AI would pose as a a real great filter as they call it so I think uh that is one of the possibilities Robin Hanson definitely addressed some of those uh options uh but if that was the case you'd probably see something happening in terms of a wall of computronium approaching us something you know aliens who are robots from other galaxies it's not obvious why
(09:51) we don't see that aspect of it if they are responsible for destroying their biological civilization where are the robots at you mentioned recently that you believe that AGI would reach or we would reach AGI by 2026 is that still the case so I don't make those predictions directly I look at what prediction markets are saying I look at what heads of those labs are saying and there seems to be a consensus around something like 2 3 years from now again it used to be we talked about how how long until AI 20 years 30 years now we
(10:28) can talk about how much to how much money do you need to compute that level of performance and we're starting to see predictions based on cluster size okay if we have a trillion dollar cluster we can do it next year 100 billion will take three years and so on so they can accurately plot these out like that then there are prediction laws uh scalability laws about uh what we put in in terms of compute and data sizes and the level of performance and different problems of the they're not super accurate with respect to a
(11:02) specific problem but General cognitive ability and we do the same with biological systems if you look at brain sizes from bugs uh small animals mice rats and all the way to human you get exactly that with the size of a brain you see improved performance there are obvious uh outliers uh birds have very dense neural matter so they are kind of more intelligent than the size would suggest but if you go for number of neurons versus pure size it still works well as the same prediction model yeah that makes sense you had mentioned also
(11:38) recently that the the definition of AGI has kind of been modulating or or evolving over time could you describe that a little bit a little bit further for us yeah so we have not agreed on kind of standard naming convention initially AI as a field when it was just introduced was about creating human level that's what the Turing test was measuring then people realized okay we can't do that it's just too difficult we're going to do narrow problems play chess what not AGI was kind of reintroduced as this uh goal of getting
(12:13) to True general intelligence not specific problems but generality and it's not the same as human level but it's close enough for most people to understand so AGI would be a system where it's like another human you can teach it to do new things drive a car work in a factory then it would be able to do a decent job super intelligence is where it's smarter than all humans in all domains so it's dominating and we see it in narrow domains right now it's a better chess player it's a better you
(12:46) can argue car driver on average but it's not uh better scientist better engineer yet at some point it becomes better at everything so AGI would be comparable to an intelligent individual whereas super intelligence would be an intelligent civilization even better than that that's one way to say it yeah okay so how would one determine where that intelligence lies in a spectrum between those two so there are many tests right you can give it mathematical problems to solve programming problems College exams uh typical tasks you would expect employees
(13:28) in different professions to to do medical legal profession and we're seeing those systems already perform at like 90th or 95th percentile in many of those domains in fact right now there is a drive to create the most difficult test possible where normal graduate students in a discipline would not be able to solve it but someone at Einstein level human would still be able to do well on the test but we need to be able to test Beyond average human and towards the super intelligence category let me this is a little bit off
(14:03) topic but how how fortunate do you think we are to live in this time period because I I feel like I'm one of the only people that actually realize that we are almost in a movie with how un unrealistic everything is around us and how perfect the conditions are for something that Humanity has never seen before it is interesting it's more of a computer simulation than a movie I would guess and uh we're hitting a number of meta Technologies you can argue that we had interesting times before with invention of fire wheel agriculture but
(14:39) they were never meta inventions they were inventions in a simulation now we're about to create realistic virtual reality and human level and Beyond intelligence two ingredients you need to make simulations populated by human-like avatars so you believe that and obviously nothing is 100% here but you believe that given all conditions and the unlikelihood of everything aligning as is it truly does point to us being in a simulation then it it would make a lot of sense and if we truly think we're going to get to those
(15:16) systems at that level of uh capability where they can think in a lot of detail about large problems let's say they trying to solve some human problem so they would simulate a population of human agents with that level of capability to see what the outcomes are and of course you want to run multiple experiments so you would have thousands billions I don't know how many exactly experiments with human populations running at any given moment looking at the most interesting problems such as how do you develop advanced technology
(15:49) safely and so there is a good chance you are in one of those simulations let's assume that it's that it is the case would it even be possible for us to interact with the simulation or the laws within it or to hack the simulation if you will to to work within it so it depends on many things one is what is the purpose of simulation if it's uh entertainment for example you talked about experiences it may not have very good security so there could be ways we can find software glitches exploits to get some information from outside the
(16:25) simulation about the hardware running it or whatnot if it's some sort of a security uh simulation where we're testing this capability of developing safe super intelligence then maybe it has better security it's less likely all depends on cognitive differential between our super intelligence what we're going to create as an assistant in this process and what they have uh it's very likely that they are more advanced for obvious reasons but uh it's not a given sometimes you have you know lower
(17:32) level intelligence trying to box more capable agents which is what humans have been doing with AI research on safety we wanted to come up with ways to restrict those systems to Safe environments limit their communication ability but still benefit from their capabilities do you have a a scenario or a situation that you see being most plausible or you genuinely just you have no idea what's what's down the pipes so yeah I cannot predict what a super intelligence would do we can consider a number of scenarios where you know it
(17:32) goes from really good to really bad and uh really good is for some reason it does want to help us and it's trying really hard to understand what that means it's not being kind of like a genie just granting wishes it actually attempts to understand what the consequences are would it be good for you or not uh this may happen even if we have uh misaligned somewhat malevolent super intelligence but which is smart enough to know it can take its time it's Immortal it can gather more resources have more of a strategic Advantage
(18:07) before it strikes so for good 20 years 100 years it could be very helpful gaining resources gaining trust and we will enjoy that time greatly the other outcome is it goes okay I'm the first one to make it here I cannot afford to be shut down by humans or for them to create competing super intelligences I need to do something as quickly as possible so those are kind of the two Extremes in terms of timing and outcome but anything in the middle would also make sense and again it would come up with things I can't even think of sure of course of
(18:40) course do you personally have a most frightening scenario that you have in mind so people talk about suffering risks as strictly worse than extinction risks uh basically it's where you wish you were dead yeah that's a big one okay well outside of AI what what do you think is the biggest threat to humanity so a lot of Advanced Technologies kind of come together it's not just uh intelligence we're also making good progress with synthetic biology with nanotech with quantum computers all those things and assisted
(19:17) with AI of course but also independently synthetic biology is a very concerning problem we can create viruses which are a combination of something like flu with its easy spread and something like AIDS but its delayed uh release of a problem and something as deadly as Ebola so if you combine those that could also lead to existential problems in response to people claiming that we control AI by being the builders and having agency over it you had described it as feeding an alien plant data and watching it grow
(19:53) I found that to be a really interesting comparison and I was hoping that you could elaborate a little bit more on that I like it I don't think I said alien plant I just said a plant so this I like it I like it this is wonderful it is we don't know what the outcome is we're running an experiment and then we test the system to see what capabilities it has and it surprises us all the time we're still finding new capabilities in systems released a couple models ago so we don't have full understanding how it does things what it does what it's
(20:22) capable of and more and more we have systems which you can add more capabilities to after it is released so you pre-train the base model but then given additional examples it can learn to drive a car to program a computer whatever you want kind of like a human we talked about it it's general intelligence sure is it possible that the genie is already out of the bottle and we are at the point of no return it is possible is it likely so we don't know uh I don't know a lot about the insider situation with the
(20:58) government government Labs with uh even industrial labs they usually releasing products years after they were internally developed so it's possible they already have something extremely capable extremely dangerous being finalized tested whatnot okay what are the realistic measures for uh plugging up the metaphorical Genies bottle for good and preventing AI from flourishing and advancing Beyond the point of control so in my opinion the only way to win this game is not to play it I don't think we can indefinitely
(21:35) control super intelligent machines which keep changing self-improving learning being subject to real world impacts malevolent actors faulty data it's kind of like creating a Perpetual safety machine perpetual motion machine it just cannot be done you can in short term with a static model you can guarantee certain properties but it's not something you can always remain on top of as the gap between us and those systems increases you would have to introduce intermediate AI system so that increases complexity more complexity is
(22:10) never safer you want a simpler model so it seems like uh we should concentrate on creating narrow models for specific problems we have plenty of problems to work on cure this disease figure out this optimization for climate whatnot but uh do not make something which is a General replacement for all of us sure what is something an individual like me or just somebody that is watching this could do to support the cause well you're doing it you promoting it to I assume a lot of viewers um there is not a good answer to that for some
(22:47) causes people have very simple answer you know stop taking hot showers or something here it's uh more complicated uh I don't think we have good uh measures for an individual to influence corporations governments if there is a general consensus and public is supportive of limits on This research that's helpful so educating people to the dangers of it I think is beneficial they should at least know about the experiments being run on them that's an experiment none of us consented to it we can't consent to
(23:22) it because consent requires understanding no one understands what those systems will do so by definition it's experimenting on humans with no consent and seems somewhat unethical yeah well that kind of leads me into my next question then would be what would be the most effective means of communicating this message so I was trying to prove that control safety has limits we cannot generate something this powerful and have no consequences of it so my Hope was that self-interest will play a role if leaders of those labs who are
(24:01) usually very young very rich people uh recognize that it will not benefit them perhaps they can be more careful with it but so far it's not working all of them know about the problem all of them made statements about how dangerous those systems can be and they are trapped in this uh game theoretic scenario where they have to be at the top so others don't overtake them or they hope someone else will come in externally to regulate everyone and allow them to keep their advantage but uh it's not happening
(24:34) going back to earlier we we had spoken about this what essentially this is is feeding this plant data and watching it grow and learning what its capabilities are is that is it possible that the human mind and Consciousness hasn't we haven't really found out the capabilities of our own capabilities yet is that possible that is a great question and I I actually think about it a lot so what what advantage do we have because we are conscious is there something it does that the brain as a mechanistic explanation is not providing so that's
(25:10) the philosophical zombie type problems and people talk about all sorts of woo with quantum physics and conscious agents collapsing wave functions uh to render the universe into what we see I don't think we have enough knowledge in that or tests for that but it is definitely like likely that we don't know most of capabilities of our systems as well okay yeah I agree so we know that many people have this inner monologue and I know this is a little bit offhand but um we a lot of people have this inner monologue
(25:45) as we think and some people don't have this this constant dialogue I got don't know I guess it's a monologue whatever it is it's it's a voice in our head as we think is this something first off is that something that you have personally and is this something that's relevant to AI at all it's relevant to AI we found that telling the system to do that to Think Through possibilities to analyze its own thinking greatly improves performance almost as much as all the hardware progress and algorithmic
(26:15) progress separately I think there is a spectrum for humans some people have like perfect visualizations of what they think about others don't have imagery they have words I presume they are pre linguistic humans who think without language whatsoever so there is a great spectrum and usually it's a monologue but some people hear many voices and that's a whole different problem okay what would you say would be the ideal point for AI I think of systems like Jarvis from Iron Man where he's got the AI that basically
(26:49) handles all the mundane tasks for him but it's still is very self-driven so the systems we have today we have not fully deployed in the economy they can probably easily replace a great number of basic jobs a lot of jobs for sure so there are trillions of dollars of unused value which we can concentrate on milking for a long time we don't have to try to get to Super intelligence as fast as we can we just don't have the capability to properly test debug monitor those systems even if it was possible which is not something I think
(27:26) we can do I want to bring up something that you had mentioned briefly and and apologies if if I'm incorrect with this but I believe you had mentioned something about experiencing the Pauli effect which let me read this off for people that aren't familiar with it the supposed tendency of technical equipment to encounter critical failure in the presence of certain people the term was coined after mysterious anecdotal stories involving Austrian theoretical physicist Wolfgang Pauli describing numerous instances in which
(27:54) demonstrations involved equipment suffering technical problems only when they are present is this something you're familiar with I had a few examples obviously it's not science it's not data it's not evidence-based but every time you know you lose two three taxis couple airplane flights are cancelled without possibility of rebooking then you go to give a talk about safety of super intelligence you start to think hm the simulation's really trying hard do you believe that synchronicities the synchronicities just little little
(28:30) happenstance coincidences are actually the simulation giving you some sort of wink if you will no I I don't think it's that way at all I think there is just a lot of probabilistic resources and weird things will happen if you have enough opportunities for them to happen sure uh I think we're closer to evidence for digital nature of simulation when we look at quantum physics the latest experiments in terms of again Observer effects and how having observers modifies uh what is produced in the experiment that's to me a lot closer to
(29:07) what we see in video games and Virtual Worlds sure well while we're on the subject of that talking about the simulation maybe we simply exist within it maybe we could Escape it maybe we can influence it to some capacity but are we the players or are we just the NPCs that the players interact with are there no players at all is it all completely automated what are your thoughts those are wonderful questions uh people have been trying to figure it out using you know theological language forever so you are your avatar your body you have some
(29:44) basic scripts programmed in you get hungry whatnot but it seems that the hard problem of Consciousness indicates something outside of your material brain where there is a player logging into the system to experience parts of the environment we have no way of testing for it just yet would it be possible cuz right now we're just talking about the individual experience and the individual perspective and our sole individual Consciousness would it be possible that we are a sort of collective experience yeah so one possible use for
(30:21) simulation service type is qualia mining I have limited resources I want to experience the best possible experiences I don't have time to do it I create 8 billion agents they all go to different parties do different things and then somehow I figure out what was the best party to go to and I try that experience on so in a way it's me trying all 8 billion things but I concentrate on certain ones at a time are there any other Technologies I know obviously AI is something that is Incorporated in almost all areas of life or it will be
(30:55) here soon are there any other technologies that you are looking forward to the advancement of cryptocurrencies are doing really well Bitcoin all right that's a good one that's a very good example okay I I bring this up one of my one of the questions that somebody had asked uh I'll bring up a couple more of these is how long until we have the holodeck from Star Trek I never seen Star Trek in my life so if you give me specific details of what it does I would be able to better answer I have I have it all pulled up
(31:29) here right now actually a holodeck is a fictional technology from Star Trek that creates highly realistic immersive virtual environments using Holograms force fields and matter replication it allows users to interact with simulated environments as if they were real including Landscapes characters and objects users can experience anything from historical Recreations to entirely new imaginative scenarios so I think quality video quality is getting to the point where it's same resolution as human visual system that's the upper limit so Apple
(32:04) products we always talk about Retina-level pixels uh so I think we got there in that regard we're still working on haptics for your body to physically perceive a lot of the movement and punches and whatnot the hardest part is switching your brain to where it doesn't remember entering the simulation the moment we can do that even crappy 8bit simulation would be real to you you don't know what the better one is right if we can turn off memory of high quality one and just tell you this is what you got you're playing Pong in this
(32:36) world that's you that that would be the level so we need to forget that we are in one and uh we're pretty close is is that the ultimate goal then is is to just have the flip flip the switch of immersion and just you're there well what the goal is it depends on who's doing it and why I'm trying to answer the question question is how soon can we have realistic simulations and I think that's in terms of Graphics in terms of sound in many ways we at the level where you can't tell you put it on and you're
(33:09) like okay I'm in Hawaii this is real like I see the sunset but you still remember that it's just it's like lucid dreaming you kind of know that you're in a dream but the moment you can turn that off it's like it's a nightmare you can't escape that dream until through some means it's switched off yeah in essence it's almost asking when will we have the ability to make a simulation ourselves which we have no idea when that'll be CU that's almost what it is is is creating
(33:36) a simulation especially if that is the ultimate goal is to uh just flip the switch of the immersion and you forget which we we may have this the flip switched already and we're we're forgetting that you know we our character kind is doing it now people have created Virtual Worlds populated by large language models to study how they interact communication economy in many cases they are not aware that they are AI in a simulation there are some funny examples where people interact those characters interact with them and reveal
(34:11) the information to them and they go like oh really how much experience do you have with VR I tried using it it was too time consuming I quit yeah um I wanted to ask can AI be used to replace a government it depends on how much you believe that government actually does something in need of doing what is your view on the Potential Threat of AI coupled with Nanobots so as I mentioned uh synthetic bio nanotech all of those are very powerful Technologies they've developed to a certain level so AI can definitely
(34:51) use that as a way to Impact Physical World humanoid body may not be the ultimate way to do that so some sort of bacteria biological entities biological robots or Nanobots could be greatly Superior in that regard comments here I miss the option it's all a bunch of nonsense and there's nothing actually intellig intelligent about AI would you agree with that that there is nothing intelligent about modern models some people believe that AI won't actually get to the AGI point I believe is what he's some people think we
(35:34) already got to it that's a good point better than if you take a random human the system today speaks 100 more languages plays 100 more instruments I mean more but I'm limiting it to 100 in any domain of knowledge politics history psychology chemistry it would dominate an average human I'm not sure how that argument is made today when we started this research 10 years ago people were saying things like that and maybe they had a reasonable chance of being right 5% or something today it's silly AI is not even that
(36:12) advanced we need to Triple down on it and accelerate it but obviously regulate it just in case always just in case so acceleration is interesting we are hearing that we are 2 3 years away from this we have absolutely no safety plan no regulation and they want to accelerate meaning it will be 2 months or 2 weeks I don't follow that logic at all the bad part is the regulation the regulation is for us not them it's to regulate what product comes to us if we had unfettered access to the same AI as them we could counter their actions with
(36:48) our own tools again we're not worried about tools here we're talking about creating agents super intelligent agents and it doesn't matter who creates them if you're not controlling it malevolent super intelligence made by American company or Chinese government or whatever is the same well I think this this comment uh exemplifies it perfectly accelerate the AI so we could have the future that we used to think about I don't care about the risks I want a flying car we have flying cars you can
(37:20) buy it right now Google it I wanted to get your your thoughts on um obviously my audience is a little bit obviously we we have an interest in Ai and the the consequences that may come from it but I wanted to to get your thoughts on some things outside of AI as far as the conspiratorial realm do you believe that there is some sort of Shadow government behind the the scenes or is it just literally no rulers or is the billionaires in charge who do you think runs the world you're trying to get me in trouble aren't you you you can say I
(37:52) don't want to answer it's up to you so it depends if we go with what we discussed before we're a simulation right so then you know who's running it the simulators are we know nothing about them you can look at what is happening is kind of this is what the simulation is about this is the type of people who are probably main characters in that simulation that information is well known as to politics I think it's kind of boring I don't think it's worth the time given the real problems we have
(38:25) have you ever seen any unidentified aerial objects aerial phenomena UFOs or anything not to my knowledge no no okay do do you believe that they exist it would make sense statistically to have you know other civilizations in a universe that big but I don't think we captured them have alien bodies or anything like that the impact would be much greater we haven't seen any change in technology or anything as a result so you don't believe that we we have any sort of suppressed technology being intentionally held back or
(39:00) anything I mean if they suppressed it how would I see it true that's a good point you don't think it's even possible well everything's possible I mean but the government is not particularly good at keeping secrets indefinitely I mean Snowden would tell us Trump would definitely tell us if he had aliens in a basement that's that's a pretty good point that's a pretty good point do you believe that there are what are your thoughts really on the different layers of reality as far as
(39:33) beings that we can't interact with because we're we're obviously only able to perceive so much of reality around us as far as aliens paranormal demons ghosts archons whatever do you believe that there are other layers of reality around us that we we can't perceive I have zero knowledge in that regard I think too many people share opinions and the means they know nothing about I will try to concentrate on what I know something about very intelligent I love it okay well Dr Roman I think I have all
(40:09) the questions asked that I have pulled out is there anything else you'd like to talk about no you covered it well perfect let's solve it now you got your minions let's go and fix it yeah absolutely I I do believe that that is arguably the single most important thing that we can do is to just share this because yeah as I said and as you've stated plenty of times that this is arguably the biggest existential threat to us at at our uh in our in our time but yeah I I I suppose that's all I got um Dr Roman I want to
(40:42) thank you so much for your time today thank you so much look forward to seeing final result awesome thanks Dr Roman we'll talk to you soon



COUNTRY SONG SO GOOD!!!

Magical Systems List

Magical Systems: A Comprehensive List

Below is a categorized list of various magical systems, both historical and modern. Click on the colorful name of each system to learn more via its Wikipedia page or official website.

Ancient Mediterranean and Near Eastern Systems

Medieval and Renaissance Systems

Modern Western Esoteric Systems

Indigenous Shamanic Systems

Eastern Traditions

African/Diaspora Systems

Divinatory Systems

AI Map/wiki

Technology & AI
Artificial Intelligence
Machine Learning
Neural Networks
AI in Art & Creativity
AI in Healthcare
AI in Business
AI & Consciousness
Robotics
Singularity
Transhumanism
Future of Technology
AI Ethics

map of the esoteric

Esotericism Mind Map

Exploring the Vast World of Esotericism

Esotericism, often shrouded in mystery and intrigue, encompasses a wide array of spiritual and philosophical traditions that seek to delve into the hidden knowledge and deeper meanings of existence. It's a journey of self-discovery, spiritual growth, and the exploration of the interconnectedness of all things. This mind map offers a glimpse into the vast landscape of esotericism, highlighting some of its major branches and key concepts. From Western traditions like Hermeticism and Kabbalah to Eastern philosophies like Hinduism and Taoism, each path offers unique insights and practices for those seeking a deeper understanding of themselves and the universe. Whether you're drawn to the symbolism of alchemy, the mystical teachings of Gnosticism, or the transformative practices of yoga and meditation, esotericism invites you to embark on a journey of exploration and self-discovery. It's a path that encourages questioning, critical thinking, and direct personal experience, ultimately leading to a greater sense of meaning, purpose, and connection to the world around us.

Jeffrey Epstein's Little Black Book Unredacted

PostsOfTheYeer

INTERESTORNADO
Michael's Interests
Esotericism & Spirituality
Technology & Futurism
Culture & Theories
Creative Pursuits
Hermeticism
Artificial Intelligence
Mythology
YouTube
Tarot
AI Art
Mystery Schools
Music Production
The Singularity
YouTube Content Creation
Songwriting
Futurism
Flat Earth
Archivist
Sci-Fi
Conspiracy Theory/Truth Movement
Simulation Theory
Holographic Universe
Alternate History
Jewish Mysticism
Gnosticism
Google/Alphabet
Moonshots
Algorithmicism/Rhyme Poetics
"In the dance of stars and symbols, the universe whispers secrets only the heart can decode. Embrace the mystery, for within it lies the magic of infinite possibility."


"a mystery permitted of these substances towards any tangible recognition instrument within extreme preeminent & quantifiable utilization qualia visual"- GeminiCool

* Notice

The majority of content used in videos is licensed by our partners. We may use third-party material when its usage falls under the Fair Use legal doctrine. If you are the legal owner of any content we used on the channel and would like it removed, we will gladly resolve the issue. ✅ Fair Use Disclaimer 1. The videos have no negative impact on the original works. 2. The videos we make are used for educational purposes. 3. The videos are transformative in nature. 4. We use only the audio component and tiny pieces of video footage, only when necessary. This video features materials protected by the Fair Use guidelines of Section 107 of the Copyright Act. All rights are reserved to the copyright owners.