Changelog

0.9.1

  • Add tokenCount field to OpenTelemetry-emitted spans. Now, if you're emitting via OpenTelemetry (e.g. to Datadog), the spans will tell you how many tokens each component resolved to. This is helpful for answering questions like "how big is my system message?".

0.9.0

  • Breaking: Remove prompt-engineered UseTools. Previously, if you called UseTools with a model that doesn't support native function calling (e.g. Anthropic), UseTools fell back to a polyfill that used prompt engineering to simulate function calling. In practice this wasn't reliable enough, so we've dropped it.
  • Fix issue where gpt-4-32k didn't accept functions.
  • Fix issue where Anthropic didn't permit function call/responses in its conversation history.
  • Add Anthropic's claude-2 models as valid chat model types.
  • Fix issue where Anthropic prompt formatting included extra colons.

0.8.5

  • Fix issue where OpenTelemetry failures were not being properly attributed.

0.8.4

  • Add OpenTelemetry integration for AI.JSX render tracing, which can be enabled by setting the AIJSX_ENABLE_OPENTELEMETRY environment variable.
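
For example, a minimal sketch (it assumes you've separately configured an OpenTelemetry exporter, e.g. for Datadog):

```tsx
/** @jsxImportSource ai-jsx */
// Run with: AIJSX_ENABLE_OPENTELEMETRY=1 node dist/app.js
import * as AI from 'ai-jsx';
import { ChatCompletion, UserMessage } from 'ai-jsx/core/completion';

// With the flag set, rendering emits spans that trace each component.
const result = await AI.createRenderContext().render(
  <ChatCompletion>
    <UserMessage>Say hello.</UserMessage>
  </ChatCompletion>,
);
console.log(result);
```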

0.8.3

  • Throw validation errors when invalid elements (like bare strings) are passed to ChatCompletion components; see the sketch after this list.
  • Reduce logspam from memoization.
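
A sketch of the distinction (validation happens when the element is rendered):

```tsx
/** @jsxImportSource ai-jsx */
import { ChatCompletion, UserMessage } from 'ai-jsx/core/completion';

// Fails validation when rendered: a bare string is not a conversational element.
// const bad = <ChatCompletion>What is the capital of France?</ChatCompletion>;

// Wrap free text in a message component instead:
const good = (
  <ChatCompletion>
    <UserMessage>What is the capital of France?</UserMessage>
  </ChatCompletion>
);
```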

0.8.2

  • Fix issue where the description field wasn't passed to function definitions.

0.8.1

  • Add support for token-based conversation shrinking via <Shrinkable>.
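
A sketch of the idea (the importance and replacement props are assumed from the conversation docs for this release; verify against your version):

```tsx
/** @jsxImportSource ai-jsx */
import { ChatCompletion, SystemMessage, UserMessage } from 'ai-jsx/core/completion';
import { Shrinkable } from 'ai-jsx/core/conversation';

// When the conversation exceeds the model's token budget, the lowest-importance
// <Shrinkable> content is dropped (or swapped for its replacement) first.
const app = (
  <ChatCompletion>
    <SystemMessage>You are a helpful assistant.</SystemMessage>
    <Shrinkable
      importance={0}
      replacement={<UserMessage>(earlier discussion omitted)</UserMessage>}
    >
      <UserMessage>...a long, less important transcript...</UserMessage>
    </Shrinkable>
    <UserMessage>Summarize what we decided.</UserMessage>
  </ChatCompletion>
);
```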

0.8.0

  • Move MdxChatCompletion to be MdxSystemMessage. You can now put this SystemMessage in any ChatCompletion to prompt the model to give MDX output.
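
Roughly (the import path and usageExamples prop are assumptions based on the MDX docs; check your version):

```tsx
/** @jsxImportSource ai-jsx */
import { ChatCompletion, UserMessage } from 'ai-jsx/core/completion';
import { MdxSystemMessage } from 'ai-jsx/react/jit-ui/mdx'; // path assumed

const app = (
  <ChatCompletion>
    {/* Prompts the model to answer in MDX rather than plain text. */}
    <MdxSystemMessage usageExamples={<>Use a Card component to present structured results.</>} />
    <UserMessage>Show me a summary of this order.</UserMessage>
  </ChatCompletion>
);
```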

0.7.3

  • Update readme.

0.7.2

  • Add Converse and ShowConversation components to facilitate streaming conversations.
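
A sketch of ShowConversation (the present callback and the message shape are assumptions from the conversation docs; verify against your version):

```tsx
/** @jsxImportSource ai-jsx */
import { ChatCompletion, UserMessage } from 'ai-jsx/core/completion';
import { ShowConversation } from 'ai-jsx/core/conversation';

// `present` formats each message as it streams through the conversation.
const app = (
  <ShowConversation
    present={(message) => (
      <>
        {message.type}: {message.element}
        {'\n'}
      </>
    )}
  >
    <ChatCompletion>
      <UserMessage>Tell me a joke.</UserMessage>
    </ChatCompletion>
  </ShowConversation>
);
```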

0.7.1

  • Change ChatCompletion components to render to <AssistantMessage> and <FunctionCall> elements.

0.7.0

  • Move memo to AI.RenderContext to ensure that memoized components render once, even if placed under a different context provider.

0.6.1

  • Add AIJSX_LOG environment variable to control log level and output location.

0.6.0

  • Update <UseTools> to take a complete conversation as a children prop, rather than as a string query prop.
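
After this change, usage looks roughly like the following (the Tool shape and import path are sketched from the batteries docs of this era; the getTemperature tool is hypothetical):

```tsx
/** @jsxImportSource ai-jsx */
import { SystemMessage, UserMessage } from 'ai-jsx/core/completion';
import { UseTools, Tool } from 'ai-jsx/batteries/use-tools'; // path assumed

// Hypothetical tool, for illustration only.
const tools: Record<string, Tool> = {
  getTemperature: {
    description: 'Look up the current temperature for a city',
    parameters: {
      city: { description: 'The city to look up', type: 'string', required: true },
    },
    func: ({ city }: { city: string }) => `It is 72F in ${city}.`,
  },
};

// The conversation now goes in as children instead of a string `query` prop.
const app = (
  <UseTools tools={tools}>
    <SystemMessage>You can look up current weather.</SystemMessage>
    <UserMessage>How warm is it in Seattle?</UserMessage>
  </UseTools>
);
```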

0.5.16

  • Update toTextStream to accept a logger, so you can now see log output when you're running AI.JSX on the server and outputting to a stream. See AI + UI and Observability.
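
For example (a sketch; PinoLogger and its import path are assumptions, see the Observability docs):

```tsx
/** @jsxImportSource ai-jsx */
import { toTextStream } from 'ai-jsx/stream';
import { PinoLogger } from 'ai-jsx/core/log'; // wrapper assumed
import { pino } from 'pino';
import { ChatCompletion, UserMessage } from 'ai-jsx/core/completion';

// Passing a logger as the second argument surfaces render logs on the server.
const stream = toTextStream(
  <ChatCompletion>
    <UserMessage>Stream me a haiku.</UserMessage>
  </ChatCompletion>,
  new PinoLogger(pino({ level: 'debug' })),
);
```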

0.5.15

0.5.14

0.5.13

0.5.12

  • Update readme.md in the ai-jsx package to fix bugs on the npm landing page.

0.5.11

  • Make JIT UI stream rather than appear all at once.
  • Use openai-edge instead of @nick.heiner/openai-edge.

0.5.10

0.5.9

0.5.8

  • ImageGen now produces an Image object, which renders to a URL on the command line but becomes an <img /> tag when used in the browser (React/Next).
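
For example (a minimal sketch; the import path is assumed):

```tsx
/** @jsxImportSource ai-jsx */
import * as AI from 'ai-jsx';
import { ImageGen } from 'ai-jsx/core/image-gen'; // path assumed

// On the command line this resolves to a URL; in React/Next it renders an <img /> tag.
const result = await AI.createRenderContext().render(
  <ImageGen>A watercolor painting of a lighthouse at dawn</ImageGen>,
);
console.log(result);
```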

0.5.7

0.5.6

0.5.5

  • Fix build system issue that caused problems for some consumers.

0.5.4

  • Remove need for projects consuming AI.JSX to set "moduleResolution": "esnext" in their tsconfig.
  • Add Weights & Biases integration.

0.5.3

  • Fix how env vars are read.

0.5.2

  • When reading env vars, read from VAR_NAME and REACT_APP_VAR_NAME. This makes your env vars available to projects using create-react-app.
  • Add OpenAI client proxy.

0.5.1

  • Initial release.