
Rust GPT 1.7.7

This area is intended for discussion and questions. Please use the support area for reporting issues or getting help.

Recommended Comments



papi

Posted

The Death Notes integration is way cool. Never seen anything like this before.

Is there a way to skip having to use /askgpt in chat, so that every chat message prompts GPT?

  • Like 1
Covfefe

Posted

  On 5/4/2023 at 2:48 AM, GOO_ said:

In your config file, change this line:

"Question Pattern": "!gpt",

To this
 

"Question Pattern": "",

Now as long as the player typing in chat has the permission RustGPT.chat everything they type in chat will be answered by RustGPT. If you want you can do this...
 

"Question Pattern": "(who|what|when|where|why|how|is|are|am|do|does|did|can|could|will|would|should|which|whom).*?$",

This way anyone that asks a properly formed question will be answered by ChatGPT. For example "When is wipe?" will get a response. "When is wipe." will not get a response.
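For anyone curious how that pattern actually behaves, here is a quick sketch in Python (the plugin itself is a C# Oxide plugin, but the regex semantics are close enough; the IGNORECASE flag is my assumption about how the plugin matches). Note that, as written, nothing in the pattern requires a trailing question mark; whether the plugin adds that check elsewhere, I can't confirm:

```python
import re

# GOO_'s pattern with the JSON escaping removed. It only requires a
# question word somewhere in the message -- the lazy .*?$ tail does not
# demand a "?" at the end.
pattern = re.compile(
    r"(who|what|when|where|why|how|is|are|am|do|does|did|can|could|"
    r"will|would|should|which|whom).*?$",
    re.IGNORECASE,  # assumption about the plugin's matching mode
)

print(bool(pattern.search("When is wipe?")))  # True
print(bool(pattern.search("When is wipe.")))  # True -- no "?" required
print(bool(pattern.search("nice base bro")))  # False: no question word
```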


How do I set it so the question mark ? is part of the alternation list?

 

I tried this but it wouldn't compile:


(who|anyone|what|when|where|why|how|is|are|am|do|does|did|can|could|will|would|should|which|whom|?).*$

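The compile error happens because ? is a regex quantifier, so a bare ? inside an alternation means "repeat nothing" and the engine rejects it. Escaping it as \? (written \\? inside the JSON config) matches a literal question mark. A Python illustration (the .NET engine Oxide uses rejects the unescaped form the same way):

```python
import re

bad  = r"(who|what|?).*$"    # bare ? -> syntax error ("nothing to repeat")
good = r"(who|what|\?).*$"   # \? matches a literal question mark

try:
    re.compile(bad)
except re.error as err:
    print("rejected:", err)

print(bool(re.compile(good).search("wipe when?")))  # True
```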

 

HaKaToMu

Posted

The plugin doesn't work.

Dantearconte2

Posted

Hello!! If I have ChatGPT 4.0, can I change that in "Model": "gpt-3.5-turbo"? If yes, how do I need to write it? Thanks for reading!

Covfefe

Posted (edited)

Can we shorten the Death Notes replies? Sometimes it's like three whole paragraphs.

Edited by Covfefe
  • Like 1
  • Haha 1
Vodu

Posted

Okay, now I am scared. I just wired this thing up and, OMG, what a game changer; it's blowing my mind as I type. A couple of things, though, and I'm not sure how to go about them. With the Death Notes integration, any chance it could use a different profile (read: name) to send to the server? And how do I send through the date and time of the question as context? Many people ask when the next wipe is, and it would be nice to give the AI some time context without it having to be specified in the question (it's kind of dumb, after all).

 

Wow, so many possibilities. I also have death logging to Discord, and I would love to have the AI response sent there rather than to Death Notes. I have configured the AI to commentate as a British comedian; just genius, love it!!

Vodu

Posted (edited)

  On 8/6/2023 at 5:56 PM, Covfefe said:

Can we shorten the Death Notes replies? Sometimes it's like three whole paragraphs.


Just ask it to respond in one sentence. Easy, done. For example:

"Death Notes GPT Prompt": "You are a British risque comedian commentating on the hottest new battle royale deathmatch. You can use markdown in your responses. You should restrict the commentary to one sentence. Ensure that the player's name is referenced in the commentary.",

Edited by Vodu
  • Like 1
Vodu

Posted

  On 5/4/2023 at 2:48 AM, GOO_ said:

In your config file, change this line:

"Question Pattern": "!gpt",

To this
 

"Question Pattern": "",

Now as long as the player typing in chat has the permission RustGPT.chat everything they type in chat will be answered by RustGPT. If you want you can do this...
 

"Question Pattern": "(who|what|when|where|why|how|is|are|am|do|does|did|can|could|will|would|should|which|whom).*?$",

This way anyone that asks a properly formed question will be answered by ChatGPT. For example "When is wipe?" will get a response. "When is wipe." will not get a response.


I found this a bit hit and miss, TBH; the AI responded even without a question mark at the end, so I used the AI to come up with this instead:

"Question Pattern": "^(?=.*\\b(?:Who|What|Where|When|Why|How|Is|Are|Am|Do|Does|Did|Will|Shall|Can|Could|Should|Would|May|Might)\\b).+\\?$",
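Stripping the JSON escaping, the pattern is a lookahead that demands a question word somewhere, plus an anchor that demands the message end with a question mark. A quick Python check (I add IGNORECASE here as an assumption; if the plugin compiles the pattern case-sensitively, only capitalized question words would match):

```python
import re

pattern = re.compile(
    r"^(?=.*\b(?:Who|What|Where|When|Why|How|Is|Are|Am|Do|Does|Did|"
    r"Will|Shall|Can|Could|Should|Would|May|Might)\b).+\?$",
    re.IGNORECASE,  # assumption -- see note above
)

print(bool(pattern.match("When is wipe?")))  # True
print(bool(pattern.match("When is wipe.")))  # False: no trailing "?"
print(bool(pattern.match("nice base?")))     # False: no question word
```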

  • Like 1
Covfefe

Posted

  On 9/19/2023 at 7:46 PM, Vodu said:

I found this a bit hit and miss, TBH; the AI responded even without a question mark at the end, so I used the AI to come up with this instead:

"Question Pattern": "^(?=.*\\b(?:Who|What|Where|When|Why|How|Is|Are|Am|Do|Does|Did|Will|Shall|Can|Could|Should|Would|May|Might)\\b).+\\?$",


Did it work?

 

Vodu

Posted

  On 9/20/2023 at 2:49 AM, Covfefe said:

Did it work?

 


Yup, perfectly!!

  • Like 1
McPhee

Posted

It goes over the character limit quite a lot. Can we make it spread its answers out over two or three messages, or however many are required?
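One way to do that, sketched in Python rather than the plugin's C# (the 250-character default is a guess at Rust's chat cap, not a confirmed value): split the reply on word boundaries and send each piece as its own chat message.

```python
def chunk_message(text: str, limit: int = 250) -> list[str]:
    """Split a long reply into chat-sized chunks, breaking on word
    boundaries so no chunk exceeds the limit (single oversized words
    are kept whole)."""
    chunks, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) > limit and current:
            chunks.append(current)  # flush the full chunk
            current = word
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

Each returned chunk would then be sent as a separate chat broadcast instead of one giant message.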

Cabra

Posted

Hello, very good plugin. Can you fix the missing characters in Spanish? When a word has an accent, or an "ñ", it doesn't output those characters.
Greetings!

Covfefe

Posted

I'm getting a lot of question marks (????) when it tries to reply in other languages.

Cabra

Posted

[Screenshot attachment: Captura de pantalla 2023-10-23 115647.png]

GooberGrape

Posted

Could you add an option to work with this death message as well, please?

 

Vodu

Posted

Something has broken with this plugin. Even with death notices disabled it still announces them, and it doesn't seem to resolve the correct body part, as it keeps referencing -1.

papi

Posted

GOO_, will you add text-to-speech via the API?

Tbird412

Posted

What do all of these do?
 

  "AIResponseParameters": {
    "Frequency Penalty": 0.0,
    "Max Tokens": 150,
    "Model": "gpt-3.5-turbo",
    "Presence Penalty": 0.6,
    "Temperature": 0.9
  }
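For reference, these map onto standard OpenAI chat-completion parameters; a hedged summary, paraphrased from the API docs rather than from this plugin's own documentation:

```python
# The plugin's config keys next to what they control in the OpenAI API:
params = {
    "Model": "gpt-3.5-turbo",   # which model answers (also drives cost)
    "Max Tokens": 150,          # hard cap on the length of the reply
    "Temperature": 0.9,         # 0-2: higher = more random, creative output
    "Presence Penalty": 0.6,    # -2 to 2: >0 nudges toward new topics
    "Frequency Penalty": 0.0,   # -2 to 2: >0 discourages repeated phrases
}

for key, value in params.items():
    print(f"{key}: {value}")
```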

 

  • Like 1
Tbird412

Posted

Ironically I asked ChatGPT what those do and it gave me great answers haha

  • Haha 1
Tbird412

Posted

Is there a way to use an assistant model rather than the default ones?

GOO_

Posted

  On 2/14/2024 at 1:49 PM, Tbird412 said:

Is there a way to use an assistant model rather than the default ones?


I don't know yet. I'm sure you can but I haven't had time to play with the assistants yet. They require their own files so I have to play around with them. 

GOO_

Posted

  On 1/20/2024 at 11:08 AM, papi said:

GOO_ will you add text to voice to the API?


I tried doing this a few months ago and it was way too janky. I'll try again though. I need someone smarter than myself to help me out with that. 

Tbird412

Posted

So a few things ....

1) I'd flip that to 3.5 if I were you. Go look at the cost difference; it is insane. I think our servers ran up about $0.06 in one day at the worst. Then I tried gpt-4 in hopes it would fix the issue I was having, and in one day of using that it ran up to about $4.62 with LESS usage than the day before. It was insane.

2) I had to do a LOT of code tweaking. In its default form (downloaded from here) it was not showing any kills. I forgot what the original criteria were (I have already customized my kill criteria), and I am not here to insult the author of any plugin, so I am choosing my words carefully, but I think the issue I found was something like "if victim is NOT an NPC and killer is NOT an NPC"; I honestly don't remember what it was in its original form. But I altered mine some and it is showing all the deaths fine now. I just cannot stop it from sending the giant paragraph.

I am trying to get the new version up to speed (by that I mean migrating my changes from the last version over to this new version), but I aborted that migration because of the paragraph thing. I am new to this AI stuff, but I cannot figure out for the life of me why this version sends a huge paragraph when the previous version (same settings) does not.

OK, yeah, I just re-downloaded the original to check. Yes, this line is stopping it from doing death notes if there is an NPC involved (victim or killer). Most servers that run any sort of death notes are PVE, because on PVP servers those types of plugins can be frowned upon:

if (victim != null && !(victim is NPCPlayer) && attacker != null && !(attacker is NPCPlayer))
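In other words (a Python-flavored sketch; the names are illustrative, not the plugin's actual API), the stock gate drops any death where either side is an NPC, so a configurable flag is one way to let NPC kills through on PVE servers:

```python
def should_announce(victim_is_npc: bool, attacker_is_npc: bool,
                    include_npcs: bool = False) -> bool:
    """Mirror of the C# gate above: by default, suppress any death
    involving an NPC; a config flag could make that optional."""
    if include_npcs:
        return True  # PVE-friendly: announce everything
    return not victim_is_npc and not attacker_is_npc

print(should_announce(True, False))                     # False (stock behavior)
print(should_announce(True, False, include_npcs=True))  # True
```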

As for my issue with paragraphs, I just don't get it. Old version: fine. New version: huge paragraph on each death. Same exact settings and prompt phrases.

Some of the other features I added to my copy (if the dev here is interested in the code, I am all for sharing):

- I changed the cooldown to a per-player basis. It is also dynamic (explained below).
- I changed server broadcast to iterate through the online players and send directly rather than just broadcast to the entire server.  This was done to enable these next two options.
- I added a command /DMtoggle for players that do not want to see the death messages (you know how some players call everything "spammy" even if it is only every now and then)
- I added a command /QAtoggle for players to turn off seeing the AI answer questions in chat.  Again, some real veteran players get annoyed by it because they "know everything" lol
- I added a permission "blocked" for players that don't want the bot to answer their questions (we use a very global set of question triggers not just a command).  This is also mainly for the following ...
- I added some playtime-based functionality, since we are using this bot to answer new players' questions and our veteran players are not very interested in it at all. I assigned the blocked permission to our higher-playtime Oxide groups so they get ignored.
- I then added the logic that if the player does NOT have the blocked permission (so they are pretty new) it performs as normal.  But if they DO have the blocked permission (they are not new) it completely ignores anything they ask unless they purposely include the command keyword for the AI bot (so in other words if you are new, he answers everything you might ask, if you are not new you must trigger him on purpose)
- I got rid of the "chunks" logic in mine.  Sorry but my entire point is to keep the answers short and not spammy.  If I ever have answers big enough to need chunks, then I am doing something wrong.  So to save on unnecessary logic (especially since I already iterate through the players for broadcasting, which means I would need to nest a foreach statement for the chunks inside of the foreach for the players) I just flat out removed that logic.
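The playtime gating described above can be sketched like this (Python rather than the plugin's C#; the permission and trigger names are placeholders, not the plugin's real identifiers):

```python
def wants_reply(message: str, has_blocked_perm: bool,
                trigger: str = "!gpt") -> bool:
    """New players (no 'blocked' permission) get answers to anything
    the question triggers match; veterans must invoke the trigger
    keyword on purpose."""
    if has_blocked_perm:
        return message.lower().startswith(trigger)
    return True  # new player: fall through to normal question matching

print(wants_reply("when is wipe?", has_blocked_perm=False))      # True
print(wants_reply("when is wipe?", has_blocked_perm=True))       # False
print(wants_reply("!gpt when is wipe?", has_blocked_perm=True))  # True
```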

So far all of that has been delightful and the players love it.

The only two issues we are facing now:
1) Cannot figure out this issue with the response being a massive paragraph only in the new version
2) I cannot figure out how to code it to use my OpenAI assistant rather than the default models.

 

  • Like 1
Tbird412

Posted

Proof about the costs (the darker green is gpt-4, the lighter is gpt-3.5; ignore the fuchsia, that is our fine-tuning models). Also attaching our token bar graph so you can see we used MUCH fewer tokens on the days the costs went through the roof:

[Attached: cost and token usage graphs]

