name: Server.Enrichment.AI.OpenAI
author: Matt Green - @mgreen27, refactored from the original by Wes Lambert - @therealwlambert|@weslambert@infosec.exchange
description: |
Query OpenAI for analysis of data.
Parameters:
* `PrePrompt` - The system prompt. Defaults to: "You are a Cyber Incident
responder and need to analyse forensic collections. You have an eye for detail
and like to use short precise technical language. Your PRIMARY goal is to
analyse the following data and provide summary analysis -"
* `Prompt` - The user prompt as a string. When pushing a dict object via
`PromptData`, it is good practice to include some context, such as the type of
data under analysis or the source artifact name.
* `PromptData` - An optional object that is serialised and appended to the
user prompt.
* `Model` - The model to use for the request.
* `MaxPromptSize` - If set, the final prompt is truncated to this size in
bytes to help stay within model context limits.
This artifact can be called from within another artifact (such as one looking
for files) to enrich the data made available by that artifact.
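For example, a minimal sketch of enriching another collection from a server
notebook (the source artifact, column names and prompt wording below are
illustrative):

```vql
SELECT * FROM foreach(row={
    SELECT OSPath, Size FROM source(artifact="Windows.Search.FileFinder")
  },
  query={
    SELECT OSPath, Size, ResponseText AS AISummary
    FROM Artifact.Server.Enrichment.AI.OpenAI(
        Prompt="Windows.Search.FileFinder hit for review:",
        PromptData=dict(OSPath=OSPath, Size=Size))
  })
```

Each row generates a separate API call, so consider `MaxPromptSize` and the
number of rows before enriching large result sets.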
type: SERVER
parameters:
- name: PrePrompt
type: string
description: |
A prefix used as the system prompt. Useful when asking a question and
then providing the data to analyse separately.
default: |
You are a Cyber Incident responder and need to analyse forensic
collections. You have an eye for detail and like to use short precise
technical language. Your PRIMARY goal is to analyse the following data
and provide summary analysis -
- name: Prompt
type: string
description: The user prompt sent to OpenAI
default: "what is prefetch?"
- name: PromptData
type: string
description: Optional data sent to OpenAI - this data is serialised and appended to the prompt
- name: Model
type: string
description: The model used for processing the prompt
default: gpt-4o
- name: OpenAIToken
type: string
description: Token for OpenAI. Leave blank here if using server metadata store.
- name: MaxPromptSize
type: int
description: If set, truncates the final prompt to this size in bytes to help stay within model context limits.
sources:
- query: |
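// Prefer an explicitly supplied token, otherwise fall back to the
// server metadata store.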
LET Creds <= if(
condition=OpenAIToken,
then=OpenAIToken,
else=server_metadata().OpenAIToken)
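// Combine the user prompt with the serialised PromptData, optionally
// truncating to MaxPromptSize bytes.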
LET UserPrompt = if(condition= MaxPromptSize,
then = (Prompt + " " + serialize(item=PromptData))[:MaxPromptSize],
else = Prompt + " " + serialize(item=PromptData) )
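// Build the chat messages: system prompt followed by the user prompt.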
LET messages = (dict(role="system",content=PrePrompt), dict(role='user',content=UserPrompt))
LET headers = dict(`Authorization`='Bearer ' + Creds, `Content-Type`="application/json")
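// POST the completion request and extract the response text.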
SELECT
PrePrompt as SystemPrompt,
UserPrompt,
parse_json(data=Content).choices[0].message.content AS ResponseText,
parse_json(data=Content) AS ResponseDetails
FROM http_client(
url='https://api.openai.com/v1/chat/completions',
headers=headers,
method="POST",
data=serialize(item=dict(model=Model, messages=messages))
)