A ChatGPT integration library for .NET, supporting both OpenAI and Azure OpenAI Service.
The library is available on NuGet. Just search for ChatGptNet in the Package Manager GUI or run the following command in the .NET CLI:
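The .NET CLI command for adding the package is the standard NuGet one:

```shell
dotnet add package ChatGptNet
```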
Register ChatGPT service at application startup:
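A minimal registration sketch for a Minimal API project (the `UseOpenAI` parameter name and the model value are assumptions; use your own API key):

```csharp
// Program.cs: register ChatGptNet at application startup.
// The API key and model below are placeholders.
builder.Services.AddChatGpt(options =>
{
    options.UseOpenAI(apiKey: "sk-...");   // or options.UseAzure(...), see below
    options.DefaultModel = "gpt-4o";
});
```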
ChatGptNet supports both OpenAI and Azure OpenAI Service, so it is necessary to set the correct configuration settings based on the chosen provider:
OpenAI (UseOpenAI)
ApiKey: it is available in the User settings page of the OpenAI account (required).
Organization: for users who belong to multiple organizations, you can also specify which organization is used. Usage from these API requests will count against the specified organization's subscription quota (optional).
Azure OpenAI Service (UseAzure)
ResourceName: the name of your Azure OpenAI Resource (required).
ApiKey: Azure OpenAI provides two methods for authentication. You can use either API Keys or Azure Active Directory (required).
ApiVersion: the version of the API to use (optional). Allowed values:
2023-05-15
2023-06-01-preview
2023-10-01-preview
2024-02-01
2024-02-15-preview
2024-03-01-preview
2024-04-01-preview
2024-05-01-preview
2024-06-01
2024-07-01-preview
2024-08-01-preview
2024-09-01-preview
2024-10-01-preview
2024-10-21 (default)
AuthenticationType: it specifies if the key is an actual API Key or an Azure Active Directory token (optional, default: "ApiKey").
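A configuration sketch for Azure OpenAI Service, using the option names listed above (the exact parameter shape of `UseAzure` is an assumption):

```csharp
// Azure OpenAI configuration sketch. DefaultModel must be the name
// of a model deployment in your Azure OpenAI resource.
builder.Services.AddChatGpt(options =>
{
    options.UseAzure(
        resourceName: "my-resource",   // the Azure OpenAI resource name
        apiKey: "...");                // API key or Azure Active Directory token
    options.DefaultModel = "my-deployment";
});
```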
DefaultModel and DefaultEmbeddingModel
ChatGPT can be used with different models for chat completion, both on OpenAI and Azure OpenAI Service. With the DefaultModel property, you can specify the default model that will be used, unless you pass an explicit value to the AskAsync or AskStreamAsync methods.
Even if it is not strictly necessary for chat conversation, the library also supports the Embedding API, on both OpenAI and Azure OpenAI. As with chat completion, embeddings can be computed with different models. With the DefaultEmbeddingModel property, you can specify the default model that will be used, unless you pass an explicit value to the GetEmbeddingAsync method.
OpenAI
Currently available models are:
gpt-3.5-turbo
gpt-3.5-turbo-16k
gpt-4
gpt-4-32k
gpt-4-turbo
gpt-4o
gpt-4o-mini
o1-preview
o1-mini
They have fixed names, available in the OpenAIChatGptModels.cs file.
Azure OpenAI Service
In Azure OpenAI Service, you're required to deploy a model before you can make calls. When you deploy a model, you need to assign it a name, and that name must match the one you use with ChatGptNet.
Caching, MessageLimit and MessageExpiration
ChatGPT is designed to support conversational scenarios: users can talk to ChatGPT without specifying the full context for every interaction. However, conversation history isn't managed by OpenAI or Azure OpenAI Service, so it's up to us to retain the current state. By default, ChatGptNet handles this requirement using a MemoryCache that stores the messages of each conversation. The behavior can be tuned using the following properties:
MessageLimit: specifies how many messages must be saved for each conversation. When this limit is reached, the oldest messages are automatically removed.
MessageExpiration: specifies how long messages are kept in the cache, regardless of their count.
If necessary, it is possible to provide a custom cache by implementing the IChatGptCache interface and then calling the WithCache extension method:
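A sketch of plugging in a custom cache (RedisChatGptCache is a hypothetical implementation of IChatGptCache, and the exact signature of WithCache is an assumption):

```csharp
// Register ChatGptNet with a custom, hypothetical Redis-backed cache
// instead of the default MemoryCache.
builder.Services.AddChatGpt(options => { /* ... */ })
    .WithCache<RedisChatGptCache>();
```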
We can also set ChatGPT parameters for chat completion at startup. Check the official documentation for the list of available parameters and their meaning.
Configuration using an external source
The configuration can be automatically read from IConfiguration, using for example a ChatGPT section in the appsettings.json file:
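A sketch of such a section (the key names mirror the options described above; the exact schema should be checked against the library's documentation):

```json
{
  "ChatGPT": {
    "Provider": "OpenAI",
    "ApiKey": "...",
    "DefaultModel": "gpt-4o"
  }
}
```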
And then use the corresponding overload of the AddChatGpt method:
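A one-line sketch, assuming the overload accepts an IConfiguration instance directly:

```csharp
// Read the ChatGPT section from appsettings.json.
builder.Services.AddChatGpt(builder.Configuration);
```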
Configuring ChatGptNet dynamically
The AddChatGpt method also has an overload that accepts an IServiceProvider as argument. It can be used, for example, in a Web API that needs to support scenarios in which every user has a different API key, retrieved from a database via dependency injection:
Configuring ChatGptNet using both IConfiguration and code
In more complex scenarios, it is possible to configure ChatGptNet using both code and IConfiguration. This can be useful if we want to set a bunch of common properties, but at the same time we need some configuration logic. For example:
Configuring HTTP Client
ChatGptNet uses an HttpClient to call the chat completion and embedding APIs. If you need to customize it, you can use the overload of the AddChatGpt method that accepts an Action<IHttpClientBuilder> as argument. For example, if you want to add resiliency to the HTTP client (say, a retry policy), you can use Polly:
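A sketch of a retry policy with Polly, assuming the Polly.Extensions.Http and Microsoft.Extensions.Http.Polly packages are referenced (the exact shape of the AddChatGpt overload is an assumption):

```csharp
// Retry transient HTTP failures up to 3 times with exponential backoff.
builder.Services.AddChatGpt(
    options => { /* ... */ },
    httpClientBuilder => httpClientBuilder.AddPolicyHandler(
        HttpPolicyExtensions
            .HandleTransientHttpError()
            .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)))));
```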
More information about this topic is available on the official documentation.
The library can be used in any .NET application built with .NET 6.0 or later. For example, we can create a Minimal API in this way:
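A Minimal API sketch, assuming the injected client interface is named IChatGptClient and that conversation identifiers are GUIDs:

```csharp
// Endpoint that forwards a message to chat completion.
app.MapPost("/api/chat/ask", async (Request request, IChatGptClient chatGptClient) =>
{
    var response = await chatGptClient.AskAsync(request.ConversationId, request.Message);
    return TypedResults.Ok(response);
});

public record class Request(Guid ConversationId, string Message);
```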
If we just want to retrieve the response message, we can call the GetContent method:
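For example:

```csharp
// Extract only the assistant's message text from the response.
var content = response.GetContent();
```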
Using parameters
Using configuration, it is possible to set default parameters for chat completion. However, we can also specify parameters for each request, using the AskAsync or AskStreamAsync overloads that accept a ChatGptParameters object:
We don't need to specify all the parameters, only the ones we want to override; the others will be taken from the default configuration.
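A per-request sketch (the property names shown are assumptions, apart from Temperature, which the document discusses below):

```csharp
// Override only Temperature for this request; everything else
// falls back to the default configuration.
var response = await chatGptClient.AskAsync(conversationId, message,
    new ChatGptParameters
    {
        Temperature = 0.2
    });
```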
Seed and system fingerprint
ChatGPT is known to be non-deterministic: the same input can produce different outputs. To control this behavior, we can use the Temperature and TopP parameters. For example, setting Temperature to values near 0 makes the model more deterministic, while values near 1 make it more creative.
However, this is not always enough to get the same output for the same input. To address this issue, OpenAI introduced the Seed parameter: if specified, the model should sample deterministically, so that repeated requests with the same seed and parameters return the same result. Nevertheless, even in this case determinism is not guaranteed, and you should refer to the SystemFingerprint response parameter to monitor changes in the backend. A change in this value means that the backend configuration has changed, which might impact determinism.
As always, the Seed property can be specified in the default configuration or in the AskAsync or AskStreamAsync overloads that accept a ChatGptParameters object.
Response format
If you want to force the response to be in JSON format, you can use the ResponseFormat parameter:
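A sketch (the ChatGptResponseFormat type name is an assumption):

```csharp
// Force a JSON response; the prompt must also ask for JSON explicitly.
var response = await chatGptClient.AskAsync(conversationId,
    "List three colors as a JSON object.",
    new ChatGptParameters { ResponseFormat = ChatGptResponseFormat.Json });
```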
In this way, the response will always be valid JSON. Note that you must also instruct the model to produce JSON via a system or user message; if you don't, the model will return an error.
As always, the ResponseFormat property can be specified in the default configuration or in the AskAsync or AskStreamAsync overloads that accept a ChatGptParameters object.
Handling a conversation
The AskAsync and AskStreamAsync (see below) methods provide overloads that require a conversationId parameter. If we pass an empty value, a random one is generated and returned.
We can pass this value in subsequent invocations of AskAsync or AskStreamAsync, so that the library automatically retrieves the previous messages of the current conversation (according to the MessageLimit and MessageExpiration settings) and sends them to the chat completion API.
This is the default behavior for all chat interactions. If you want to exclude a particular interaction from the conversation history, you can set the addToConversationHistory argument to false:
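For example:

```csharp
// Ask a one-off question without recording it in the conversation.
var response = await chatGptClient.AskAsync(conversationId, message,
    addToConversationHistory: false);
```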
In this way, the message will be sent to the chat completion API, but it and the corresponding answer from ChatGPT will not be added to the conversation history.
On the other hand, in some scenarios, it could be useful to manually add a chat interaction (i.e., a question followed by an answer) to the conversation history. For example, we may want to add a message that was generated by a bot. In this case, we can use the AddInteractionAsync method:
The question will be added as a user message and the answer as an assistant message in the conversation history. As always, these new messages (respecting the MessageLimit option) will be used in subsequent invocations of AskAsync or AskStreamAsync.
Response streaming
Chat completion API supports response streaming. When using this feature, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available. ChatGptNet provides response streaming using the AskStreamAsync method:
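A streaming sketch (assuming each partial response exposes its delta via GetContent):

```csharp
// Print tokens as they arrive from the streaming API.
await foreach (var response in chatGptClient.AskStreamAsync(conversationId, message))
{
    Console.Write(response.GetContent());
}
```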
Response streaming works by returning an IAsyncEnumerable, so it can be used even in a Web API project:
The library is also 100% compatible with Blazor WebAssembly applications:
Check out the Samples folder for more information about the different implementations.
ChatGPT supports messages with the system role to influence how the assistant should behave. For example, we can tell ChatGPT something like:
You are a helpful assistant
Answer like Shakespeare
Give me only wrong answers
Answer in rhyme
ChatGptNet provides this feature using the SetupAsync method:
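A sketch:

```csharp
// Attach a system message to a new conversation.
var conversationId = Guid.NewGuid();
await chatGptClient.SetupAsync(conversationId, "Answer in rhyme");
```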
If we use the same conversationId when calling AskAsync, then the system message will be automatically sent along with every request, so that the assistant will know how to behave.
Deleting a conversation
Conversation history is automatically deleted when the expiration time (specified by the MessageExpiration property) is reached. However, if necessary, it is possible to clear the history immediately:
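A sketch, assuming the deletion method is named DeleteConversationAsync:

```csharp
// Clear the conversation history but keep the system message.
await chatGptClient.DeleteConversationAsync(conversationId, preserveSetup: true);
```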
The preserveSetup argument allows you to decide whether to also keep the system message that has been set with the SetupAsync method (default: false).
With function calling, we can describe functions and have the model intelligently choose to output a JSON object containing arguments to call those functions. This is a new way to more reliably connect GPT's capabilities with external tools and APIs.
ChatGptNet fully supports function calling by providing an overload of the AskAsync method that allows you to specify function definitions. If this parameter is supplied, the model will decide when it is appropriate to use one of the functions. For example:
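A sketch of a function definition (GetCurrentWeather is hypothetical, and the exact property shapes of ChatGptFunction and ChatGptToolParameters are assumptions; requires System.Text.Json):

```csharp
// Describe a function with a name, a description, and a JSON schema
// for its parameters.
var functions = new List<ChatGptFunction>
{
    new()
    {
        Name = "GetCurrentWeather",
        Description = "Get the current weather in a given location",
        Parameters = JsonDocument.Parse("""
        {
            "type": "object",
            "properties": {
                "location": { "type": "string", "description": "The city, e.g. Taggia" }
            },
            "required": ["location"]
        }
        """)
    }
};

var response = await chatGptClient.AskAsync(conversationId, message,
    new ChatGptToolParameters { Functions = functions });
```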
We can pass an arbitrary number of functions, each with a name, a description, and a JSON schema describing its parameters, following the JSON Schema specification. Under the hood, functions are injected into the system message using a syntax the model has been trained on. This means that functions count against the model's context limit and are billed as input tokens.
The response object returned by the AskAsync method provides a property to check if the model has selected a function call:
This code will print something like this:
Note that the API will not actually execute any function calls. It is up to developers to execute function calls using model outputs.
After the actual execution, we need to call the AddToolResponseAsync method on the ChatGptClient to add the response to the conversation history, just like a standard message, so that it will be automatically used for chat completion:
Newer models like gpt-4-turbo support a more general approach to functions: tool calling. When you send a request, you can specify a list of tools the model may call. Currently, only functions are supported, but other types of tools will be available in future releases.
To use tool calling instead of direct function calling, you need to set the ToolChoice and Tools properties in the ChatGptToolParameters object (instead of FunctionCall and Function, as in the previous example):
The ToTools extension method is used to convert a list of ChatGptFunction to a list of tools.
If you use this new approach, you still need to check whether the model has selected a tool call, using the same approach shown before.
Then, after the actual execution of the function, you have to call the AddToolResponseAsync method, but in this case you need to specify the tool (not the function) to which the response refers:
Finally, you need to resend the original message to the chat completion API, so that the model can continue the conversation taking into account the function call response. Check out the Function calling sample for a complete implementation of this workflow.
When using Azure OpenAI Service, we automatically get content filtering for free. For details about how it works, check out the documentation. This information is returned in all scenarios when using a sufficiently recent API version. ChatGptNet fully supports this object model by providing the corresponding properties in the ChatGptResponse and ChatGptChoice classes.
Embeddings allow you to transform text into a vector space. This can be useful, for example, to compare the similarity of two sentences. ChatGptNet fully supports this feature by providing the GetEmbeddingAsync method:
This code will give you a float array containing all the embeddings for the specified message. The length of the array depends on the model used:
Newer models like text-embedding-3-small and text-embedding-3-large allow developers to trade off performance and cost when using embeddings. Specifically, developers can shorten embeddings without the embedding losing its concept-representing properties.
As for ChatGPT, these settings can be specified in various ways:
Via code:
Using the appsettings.json file:
Then, if you want to change the dimension for a particular request, you can specify the EmbeddingParameters argument in the GetEmbeddingAsync invocation:
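A sketch (the Dimensions property name is an assumption):

```csharp
// Request a shortened embedding for this call only.
var embedding = await chatGptClient.GetEmbeddingAsync(message,
    new EmbeddingParameters { Dimensions = 256 });
```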
If you need to calculate the cosine similarity between two embeddings, you can use the EmbeddingUtility.CosineSimilarity method.
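For example, assuming GetEmbeddingAsync returns the raw float array described above:

```csharp
// Compare two sentences in the embedding space; values near 1.0
// indicate similar meaning.
var first = await chatGptClient.GetEmbeddingAsync("I love dogs");
var second = await chatGptClient.GetEmbeddingAsync("Dogs are great");
var similarity = EmbeddingUtility.CosineSimilarity(first, second);
```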
The full technical documentation is available here.
The project is constantly evolving. Contributions are welcome. Feel free to file issues and pull requests on the repo and we'll address them as we can.