
Overview

Mathy's embeddings model takes in a window of observations and outputs a sequence of the same length, with a fixed-size learned embedding for each token in the sequence.

Model

The Mathy embeddings model is a stateful model that predicts over sequences of observations. This complicates collecting observations to feed to the model, but it allows for richer input features than a single, stateless observation would provide.

The model accepts an encoded sequence of tokens and values extracted from the current state's expression tree.
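As a rough illustration, the encoding for a single observation can be inspected directly. This is a minimal sketch that reuses the calls from the example below; the `nodes` attribute appears in that example's assertions, while the `values` attribute name is an assumption about `MathyObservation`:

from mathy import envs
from mathy.env import MathyEnv
from mathy.state import MathyObservation

env: MathyEnv = envs.PolySimplify()
observation: MathyObservation = env.state_to_observation(env.get_initial_state()[0])
# One entry per node in the current expression tree
print(len(observation.nodes))
# Assumed: a parallel list of float values, one per node
print(len(observation.values))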

Examples

Observations to Embeddings

You can instantiate a model and produce untrained embeddings:


from mathy import envs
from mathy.agents.base_config import BaseConfig
from mathy.agents.embedding import MathyEmbedding
from mathy.env import MathyEnv
from mathy.state import MathyObservation, observations_to_window


args = BaseConfig()
env: MathyEnv = envs.PolySimplify()
observation: MathyObservation = env.state_to_observation(env.get_initial_state()[0])
model = MathyEmbedding(args)
# Combine multiple observations into a single batched window of inputs
inputs = observations_to_window([observation, observation]).to_inputs()
# Embeddings output shape is: [num_observations, max_nodes_len, embedding_dimensions]
embeddings, attentions = model(inputs)

# The window contained two observations
assert embeddings.shape[0] == 2
# There is one output embedding for each node in the input sequence
assert embeddings.shape[1] == len(observation.nodes)
# Each embedding vector has the configured number of embedding units
assert embeddings.shape[-1] == args.embedding_units

# The attention output is a grid of [num_observations, max_nodes_len, max_nodes_len]
assert attentions.shape == [2, len(observation.nodes), len(observation.nodes)]
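Note that the model above is untrained, so the embedding and attention values are effectively random initializations; the assertions only verify the output shapes. Before using the embeddings downstream you would typically train the model or load trained weights first.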

