Integrate ChatGPT in Power Automate and get the search result via mail


ChatGPT is a recently launched AI-powered chatbot from OpenAI. In this tutorial, we will connect ChatGPT to Power Automate and deliver the final result via email.

1. Generate the ChatGPT API key

Follow the steps below to generate the API key you will use in the Power Automate flow.

Open the following URL and sign up:

https://beta.openai.com/account/api-keys

After signing up, you will land on your account page.


Click on "API keys", then click the "Create new secret key" button.

A new key will be generated. Save it somewhere safe, as it will not be shown again.


2. Create the Power Automate flow

Create a manually triggered Power Automate flow.


Select the manual trigger.


After renaming the trigger, select "Add an input".


In the first box, enter 'Chatgpt Input Dialog'; in the second box, enter 'Ask Chatgpt something'. You can use any names you like here.


Add an HTTP action in the next step.


In the HTTP action, set the following.

Method: POST

To obtain the correct request URL, use the OpenAI Completions API reference:

https://platform.openai.com/docs/api-reference/completions

When you open this link, you will land on the Completions reference page. Copy the endpoint URL highlighted there into the HTTP action's URI field.


Headers

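In summary, the HTTP action is configured as follows (the Authorization header uses the secret key generated in step 1; the value below is only a placeholder):

Method: POST
URI: https://api.openai.com/v1/completions
Headers:
  Content-Type: application/json
  Authorization: Bearer <your API key>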

Body (replace "ChatGpt Input" with the trigger input, added as dynamic content):

{
  "model": "text-davinci-003",
  "prompt": "ChatGpt Input",
  "max_tokens": 500,
  "temperature": 0,
  "n": 1,
  "stream": false,
  "logprobs": null,
  "stop": null
}


Before going further, have a look at a few important points about the OpenAI API:

Models

The OpenAI API is powered by a diverse set of models with different capabilities and price points. GPT-4 is the latest and most powerful model.

text-davinci-003 (model ID): can do any language task with better quality, longer output, and more consistent instruction-following than the curie, babbage, or ada models. It also supports inserting completions within text.

To learn more about these models:

https://platform.openai.com/docs/models/overview

Prompts

Designing your prompt is essentially how you “program” the model, usually by providing some instructions or a few examples.

To learn more about prompt design:

https://platform.openai.com/docs/guides/completion/prompt-design
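For example, the same question can be asked as a bare instruction or with explicit constraints on format and length; the second version usually produces a more predictable completion (the wording below is only illustrative):

Simple prompt:
  Tell me 5 good things about India.

More specific prompt:
  List exactly 5 good things about India as a numbered list, with one short sentence per item.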

Tokens

Tokens can be words or just chunks of characters. For example, the word "hamburger" gets broken up into the tokens "ham", "bur", and "ger", while a short and common word like "pear" is a single token.

The number of tokens processed in a given API request depends on the length of both your inputs and outputs. As a rough rule of thumb, 1 token is approximately 4 characters or 0.75 words for English text.

So 100 tokens is roughly 75 words; the max_tokens value of 500 used in this flow therefore allows roughly 375 words of output.

Check out OpenAI's tokenizer tool to learn more about how text translates to tokens:

https://platform.openai.com/tokenizer


Request body

1. model (string)

ID of the model to use.

2. prompt (string or array, defaults to <|endoftext|>)

The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.

Note: <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document.

3. max_tokens (integer, defaults to 16)

The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (the newest models support 4096).

4. temperature (number, defaults to 1)

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

It is recommended to alter this or top_p, but not both.

5. top_p (number, defaults to 1)

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

It is recommended to alter this or temperature, but not both.

6. n (integer, defaults to 1)

How many completions to generate for each prompt.

Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.

7. stream (boolean, defaults to false)

Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.

8. logprobs (integer, defaults to null)

Include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response.

The maximum value for logprobs is 5.

9. stop (string or array, defaults to null)

Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

To learn more:

https://platform.openai.com/docs/api-reference/completions/create
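As an illustration only (this is not the body used in our flow), a request that asks for two shorter, more creative completions and stops generating at a blank line could combine these parameters like this:

{
  "model": "text-davinci-003",
  "prompt": "Suggest a name for a travel blog about India.",
  "max_tokens": 64,
  "temperature": 0.8,
  "n": 2,
  "stream": false,
  "logprobs": null,
  "stop": ["\n\n"]
}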


Coming back to our flow, the body used in the HTTP action is:

{
  "model": "text-davinci-003",
  "prompt": "ChatGpt Input",
  "max_tokens": 500,
  "temperature": 0,
  "n": 1,
  "stream": false,
  "logprobs": null,
  "stop": null
}


Now the flow is ready to test.


Test the flow by passing an input string such as: "Tell me 5 good things about India."


Once the flow has run successfully, you will get a result like this.


Since we are calling ChatGPT through an API, the response arrives in JSON format, so to extract the text we use a Compose action in Power Automate.


Note: there are multiple ways to convert JSON to text in Power Automate, including the "Parse JSON", "Select", and "Compose" actions. For this demo, I have chosen the Compose action.
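If you prefer the "Parse JSON" route instead, a minimal schema covering only the field we need could look like the sketch below (this assumes you only care about the text of each choice):

{
  "type": "object",
  "properties": {
    "choices": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "text": { "type": "string" }
        }
      }
    }
  }
}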


3. Handling the HTTP response

Add a Compose action that will store the result received from the HTTP action.


Pass the following expression to the Compose action. It fetches the text of the first element of the choices array.

trim(outputs('HTTP')?['body/choices'][0]['Text'])

  • Here:
  • trim – removes leading and trailing whitespace from the string
  • body – the output body of the HTTP action
  • choices – the array of completions returned by the API
  • [0] – the first element (we get only one completion, since n is 1)
  • ['Text'] – the text field of the first element
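For reference, the completions endpoint returns JSON shaped roughly like this (the values are illustrative and shortened); this is the structure the expression above navigates:

{
  "id": "cmpl-xxxxxxxx",
  "object": "text_completion",
  "created": 1677610600,
  "model": "text-davinci-003",
  "choices": [
    {
      "text": "\n\n1. Rich Cultural Heritage: ...",
      "index": 0,
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 8,
    "completion_tokens": 120,
    "total_tokens": 128
  }
}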



Save and test the flow.


Once the flow has run successfully, you will get a result like this:

1. Rich Cultural Heritage: India is home to a diverse range of cultures, religions, languages, and customs, making it one of the most culturally rich countries in the world.

2. Incredible Natural Beauty: India is home to some of the most stunning landscapes in the world, from the snow-capped Himalayas to the lush green forests of the Western Ghats.

3. Delicious Cuisine: India is known for its delicious and varied cuisine, with each region having its own unique flavors and dishes.

4. World-Class Education: India has some of the best universities in the world, offering world-class education in a variety of fields.

5. Incredible People: India is home to some of the most hospitable and friendly people in the world, making it a great place to visit and explore.


4. Receive ChatGPT results via mail

To receive the ChatGPT result via email, we can add one more action.

Action: Send an email.

Pass the Compose output to the body of the "Send an email" action, then test the flow with one more run.
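Insert the Compose output into the email body as dynamic content. Assuming the Compose action keeps its default name "Compose", this corresponds to the expression below; the recipient and subject are only examples:

To: <your email address>
Subject: ChatGPT result
Body: outputs('Compose')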


Finally, we receive the response via email.


It is possible to build more useful and complex scenarios with ChatGPT, depending on your needs.

--- Thank You ---
