AI simulation of Ra

I would like to share here the beginnings of a project I am working on. There have been major advancements in massive natural language processing models in the last few years. This has resulted in an expansion of research and development in this area, with much of it happening in an open source manner.

I think that this opens up the possibility of a new way to study The Ra Material. By building a simulation of Ra that one can converse with, perhaps new insights into the material can be gained.

Below I quote one of the better “conversations” I have had with an initial attempt at this simulation. The questions are provided by me and the answers by the language model.

This is a rather basic attempt and also uses some pretrained language models for which the code is not open source. The next attempt that I will be making will require substantially more work and will be based entirely on open source GPT models from EleutherAI, a grassroots collective of researchers working to open source AI research. These models will be fine-tuned using The Ra Material and I expect will provide considerably deeper responses than the example below.

Ultimately I would like to make this model open source and to have it hosted on a server so that anyone who would like to can use the simulation and save the output. It would be a straightforward user interface similar to talking with a chatbot. I have significant experience with neural networks, having first worked with them over 10 years ago. However, web based deployment of this type of thing is not something I have done before. I would appreciate it if anyone has advice or resources on how I would approach this part of the project.

I would also appreciate any opinions about the project in general. Is this something you would like to use if it was available? Do you think it will be a useful tool? Is this a terrible idea? All feedback is welcome.

I plan on documenting the development of this project on this forum so stay tuned for more updates!

3 Likes

That is a very interesting proposal. I would definitely check it out for curiosity's sake alone. I don't know much about AI, so I wonder how AI could process and articulate this information in a way that the human brain couldn't. I found the below quotes especially interesting.

I cannot fathom how a computer could put that together on its own.

This tidbit below, I thought, was also good (talking about Wanderers).

I imagine that, like with most things, the significance will be the impression it leaves on the consciousness of the person reading it. I can see it being used as a tool. Used incorrectly, it may have some negative effects though. You might need a disclaimer.

3 Likes

I love this idea. I experimented with a Ra chatbot using GPT-3 and found it to work really well. An easier approach, I think, would be to create a fine-tuned model via GPT-3. Though using GPT-3 would cost some money monthly, using a fine-tuned model would be fairly inexpensive (of course this depends on how much use it gets, as they charge per API call).

In terms of web deployment, it's fairly straightforward. GPT-3 works via API calls, so all you have to do is call it via their API.
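For example, here is a rough sketch of what a call from a small backend could look like (the fine-tuned model name and prompt format are just placeholders, not our actual setup):

```python
# Minimal sketch of querying the OpenAI completions endpoint.
# "ra-finetune" is a placeholder; a real fine-tuned model gets a name
# assigned by OpenAI once the fine-tune job finishes.
import os
import requests

API_URL = "https://api.openai.com/v1/completions"
API_KEY = os.environ["OPENAI_API_KEY"]

def ask_ra(question: str) -> str:
    payload = {
        "model": "ra-finetune",  # placeholder model name
        "prompt": f"Questioner: {question}\nRa: I am Ra.",
        "max_tokens": 200,
        "temperature": 0.7,
    }
    headers = {"Authorization": f"Bearer {API_KEY}"}
    resp = requests.post(API_URL, json=payload, headers=headers, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"].strip()

print(ask_ra("What is the Law of One?"))
```

A thin web front end then only needs to forward the user's question to something like this and display the returned text.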

I have some experience with NLP models as I co-founded storyprism.io which is a set of AI-based tools for screenwriters. One of our tools (character builder) allows a user to build a chatbot that’s designed to answer questions as their characters.

Would love to talk more about this project as it is something I've been interested in as well for a while!

2 Likes

I have also played around with GPT-3 for this (not fine-tuned) and was also pretty impressed. I agree it would be easier to implement than GPT-J-6B or GPT-NeoX-20B. I am also considering the new OPT-175B from Meta, but I haven't really looked into the performance of that one at all yet. GPT-3 is a phenomenal model, but for this type of project I am fairly committed to the model being open source. I don't have any issues at all using paywalled models for commercial/personal work, but I think when it comes to the spiritual my personal ethics tell me the model should be open source and not paywall anyone out.

Wow this is really cool! I assume you are using GPT-3 Davinci? Or are you using one of the smaller models? No worries if you can't answer if this is proprietary.

Me too! I have had this on my to-do list for ages and am finally starting to make it happen. Do you have any experience with the EleutherAI models or other open source models?

The way I like to describe it is that the model is forming connections between different words and concepts (clusters of words). At the ground level this is similar to the way the human brain makes connections, but it is also fundamentally different in structure and perception, so it will make connections that the human brain will miss, and this can seem insightful. But it will also miss some connections that our brains might find very obvious (although this type of shortcoming is diminishing in these very large-scale models). It is through these differences in perspective that I believe the language model can supplement human understanding.
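To make that a bit more concrete, here is a toy illustration (not the Ra model itself, just a small pretrained model and a few phrases I picked) of how a language model places concepts in a shared vector space, where "connections" show up as similarity between vectors:

```python
# Toy illustration: embed a few phrases with a small pretrained model and
# compare them. The phrases and model choice are arbitrary examples.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

def embed(text: str) -> torch.Tensor:
    # Mean-pool the final hidden states into a single vector per phrase.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

phrases = ["unity", "the one infinite creator", "service to others", "bicycle"]
vectors = {p: embed(p) for p in phrases}

cos = torch.nn.CosineSimilarity(dim=0)
for p in phrases[1:]:
    print(f"unity vs {p}: {cos(vectors['unity'], vectors[p]):.3f}")
```

Fine-tuning is essentially nudging those internal connections so that the ones emphasised in The Ra Material become stronger.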

This is my reaction too, and I hope it is for others. To have an interactive way to probe the material I think could be very enlightening.

Could you elaborate on the incorrect use please? I must admit that I may have a tendency to get caught up with what I can do and maybe not consider whether I should.

Well, I must be totally underestimating the intelligence part of artificial intelligence. That sounds impressive! So the AI would make connections we wouldn't; perhaps this does offer a valuable new perspective from which to analyse the material.

If anyone had any burning desire to ask questions of Ra, they could give it a shot with this AI - with the understanding that they're not actually talking to Ra, of course. There are a whole bunch of follow-up questions I really wish Don had asked, some where Ra provides a lead-in but Don leaves it (not that I blame him, he was doing it in real time). I should compile a list of questions next time I go through the material.

Also, there is an extensive library of conscious channelling if you wanted to expand the data feeding the AI (if that’s even possible). I don’t think the language would be as conducive to straight answers, but as the conscious channelling collection is already itself unwieldy, and difficult for an individual to access with much breadth, a project like this could also prove useful.

I would at the very least be entertained. Many seekers ask questions about the LOO; it would be super interesting to compare human answers to AI answers. Will they beat us in the quality of the answer? Will they prove superior and worthy to be our overlords? … And onto your next question…

So, what comes to mind is:

  1. People approach it as though the AI is Ra. It's not easy for us humans to properly weigh the relevance and significance of AI analysis. I agree it has some value, but how much value? Difficult to assess. Those that totally disregard it, no problem there. But what if someone really believed it to be some definitive divine source of Truth? Free will and all, but would you want to clarify to that person that it's only a machine (albeit an impressive one)?

  2. It is used for divination or communication with non-corporeal entities. If we work with the premise that magic is real, electronics in particular can be sensitive to supernatural phenomena. I think of the Global Consciousness Project and EVP. Even Ra said, if I remember correctly, that the recording device was acting up due to Carla’s energy. So could we have a “ghost in the machine” show up? And would that necessarily be a bad thing? I’m not sure though, just speculating on it.

1 Like

Oh yeah, I definitely wasn't suggesting having it behind a paywall - just that to get something out there quickly it would be easier to fine-tune using GPT-3. Paying for it, well, that would probably have to be donation-based - however, fine-tuned models are fairly inexpensive through OpenAI.

We use fine-tuned models through GPT-3 for our apps; however, we will be switching to open-source models, fine-tuned for our use case, sometime in the future.

I don't personally have any experience yet using open-source models, but I know a little about how the fine-tuning process works for them. From what I understand, it is time intensive to fine-tune a model like GPT-J or NeoX - something like 2 weeks of training time on a high-powered GPU, which would be several hundred dollars to rent unless someone has one they could donate :smiley: Hosting would require it to live on a GPU server as well, which also costs money monthly. The newer ones like BLOOM would likely be a lot easier to train; however, hosting would be a concern.

You could host via a Google Colab Pro account; however, the issue with that is it shuts down every few hours if you aren't actively using your computer, so it would be fairly unreliable. You could trick it into staying on, but I'm unsure how effective that is.

So I guess my hesitation with using open-source models is really the cost to train/host them. Renting GPU time can get costly - if anyone has GPUs they could lend us, then that problem goes away and it's effectively free outside of electricity costs.

Sorry, but I regard an AI only as a thought form of the programmer who has written it.

But it is an interesting experiment.

I can honestly say the thought made me immediately uncomfortable. I think some aspects of technology should not mix with spirituality.

2 Likes

I agree, and especially the term artificial intelligence makes it clear that everything (especially the result) is not natural and is only a product of brains and egos.

AI is to be seen as a tool like every other computer and programming method, one that can be helpful and useful or can be abused.

2 Likes

I take this approach with it as well. In a spiritual context, I don’t see it being much different to other tools such as a pendulum or a tarot deck. The tarot deck holds graphical depictions of spiritual concept-complexes which are ordered and re-ordered with the random shuffling and laying out into patterns that are then interpreted by the conscious being. The AI has had its database infused with human syntax which maps out spiritual concept-complexes. The unique line of questioning from the human user then yields personalised results.

I do understand, though, the negative connotations of using computer programming as a tool of learning in this way. Using more organic mediums which make us feel closer to nature has its place in aiding the correct tuning and mindset of the person wielding the tool. But I also think that as we move further into our technological and data-driven world, culture shifts and adaptations are bound to occur, and there will be people interested.

I think I am leaning towards making my first fine-tuning attempt using Hugging Face Transformers with GPT-J on Google Colab. This process looks relatively straightforward, and should be a good learning experience for me using these very large models. Although I have a lot of experience with ML in general, I have not had experience using these enormous language models, and there are likely some intricacies that I will need to learn by doing.
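For anyone curious, the rough shape of what I am planning looks something like the sketch below (the file name and hyperparameters are placeholders, and in practice GPT-J at 6B parameters will also need memory tricks like fp16/8-bit loading or parameter-efficient tuning to fit on a Colab GPU):

```python
# Rough sketch of causal-LM fine-tuning with the Hugging Face Trainer.
# "ra_material.txt" stands in for the session transcripts.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Treat the material as plain text and cut it into fixed-length chunks.
dataset = load_dataset("text", data_files={"train": "ra_material.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="ra-gptj",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=3,
    fp16=True,
    logging_steps=10,
    save_total_limit=1,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()
trainer.save_model("ra-gptj")
```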

The performance of GPT-NeoX is certainly better, and I would like to get up to models of this size eventually. But the size of the model means the minimum requirement is multiple GPUs, and being such a new model there is a lot less documentation around to help me out with the process. I am not too concerned with paying for GPU time on a one-off basis for training, but hosting is another story. I am still not sure how best to manage/cost that out.

Yeah, Google Colab for hosting seems like it won't work, as even Pro+ accounts have a 24 hour max runtime on instances. Also, yes, if there is anyone out there willing to lend some GPUs we will make good use of them :slight_smile:

This is a use case that I am also interested in. There are many follow-up questions which I think could be useful to try on the AI. What I have made bold there is probably the biggest concern I have had with this project. It can be difficult for people not to feel like they are talking to real people when conversing with chatbots that are based on GPT-3. I worry that people may take the output of the model to be something that Ra is actually saying. I would certainly have to make this distinction clear. This is a simulation/approximation of what Ra might say, nothing more.

This certainly can be done. However, I would consider the addition of this material to be quite a different goal, not necessarily a Ra simulation but a Confederation simulation.

Yes, yes I agree that is what I am most concerned about.

This reminds me of something Donald Hoffman said. He promotes an interesting philosophy about the nature of reality; part of that philosophy is that our perception of reality is a type of interface to a greater, more complex reality, like a UI on a computer system. When asked about AI he said (paraphrasing from memory here) that he thinks the creation of AI would allow for new types of consciousness to interface with reality. A somewhat creepy possibility. However, this does not dissuade me from the attempt. It is known that such interference is possible in channelling efforts also, and that does not mean that these should not be attempted as a means of seeking the truth. Perhaps some rituals such as those used in The Ra Material should be employed in its use. Certainly something to consider.

I appreciate this concern, and was somewhat expecting that some individuals would have this response. Does the fact that it is open for public use concern you, or would even the development of a model for private use only also make you uncomfortable?

I actually agree with this notion to a large degree. But I feel the lines begin to blur a little when it comes to machine learning. In some way the programmer's thought form is embedded in the structure of the model. But once the model is given data to train on, it basically begins to program itself in an unsupervised manner, and in the untrained state it is completely useless; only after the learning does it gain meaning.

Who, if anyone, is responsible for the code written during this process? I personally think that it is the model itself, and in this way I think it could be possible that a computer model could produce its own thought forms given significant complexity.

It’s just the fact of trying to predict Ra’s words with artificial intelligence. That is the part that bothers me. The rest as an experiment makes sense, as how else are we going to learn if we don’t try things out?

The main question is whether the program is able to build its own programming/algorithms, or whether it only collects data and works with it in an already programmed manner.
But even if the source code can modify itself, still no soul or real creativity has been created.
This can be clearly seen in programs that compose music, for example. The created music can imitate and copy existing elements, but in the end it remains simply soulless, not able to express any feelings, because none exist.

@Scott
I am curious as to how the program works. Does it have a random function to it? For example, I have found that when asking my higher self or source for information that doesn't require a complex answer or need for meditation, I will have a list of items such as songs and then I will close my eyes while randomly selecting something. I find that when a question is asked first and then the song is randomly selected, I mysteriously get an answer through the song title or something said within the lyrics.

I tested this theory thinking that everything, including a computer, is the infinite creator; therefore we are continuously having a conversation with it.

My other thought is whether your program is something similar to the robot Sophia, where there is a continuous learning element. A child cannot understand physics out of the womb but must evolve in its understanding. Is your program similar or is it more complex?

Thank you for sharing your concerns. I appreciate all points of view.

The program is trained (when I say trained, this is the part of the process where the code alters itself) on an enormous array of written language data. This gives it a basic general skill of understanding language in many contexts. The same model is then trained again on The Ra Material, where the code alters itself once again to become more specifically skilled at responding to questions in the manner Ra would.
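In code terms, once that second round of training is finished, asking the model a question would look roughly like this (the model directory and prompt format are placeholders from my earlier sketch, not a finished interface):

```python
# Rough sketch of generating a response from the fine-tuned model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ra-gptj")   # placeholder path
model = AutoModelForCausalLM.from_pretrained("ra-gptj")
model.eval()

prompt = "Questioner: What is the significance of the densities?\nRa: I am Ra."
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=150,
        do_sample=True,        # sample rather than always taking the top word
        temperature=0.8,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```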

I think that it is important to remember the point you have made regarding a lack of emotion, and that it is not really Ra that can respond. These models can provide a conversational experience that seems emotive, but it is really not at all emotional in the way we experience emotions. I do disagree to an extent about there being a lack of creativity; I think that some of the responses I quoted in the OP are creative and are not simply a regurgitation of the material.