Welcome to Texar’s documentation!
Texar is a modularized, versatile, and extensible toolkit for text generation tasks and beyond.
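For a quick sense of the library API (the Overview linked below includes a full "Library API Example"), here is a minimal sketch that assembles an attentional sequence-to-sequence model and its maximum-likelihood training op. This is a hedged illustration rather than verbatim documentation: it assumes Texar-TF (the texar.tf namespace on TensorFlow 1.x graph mode), placeholder corpus and vocabulary file paths, and mostly default hyperparameters; exact signatures may differ across releases.

    # Sketch only. Assumes Texar-TF ("import texar.tf as tx"; older releases
    # expose the same API as "import texar as tx") and placeholder file paths.
    import texar.tf as tx

    # Data: a paired (source, target) text corpus served through a DataIterator.
    # A single shared vocabulary file is assumed for source and target.
    data = tx.data.PairedTextData(hparams={
        'source_dataset': {'files': 'train.src.txt', 'vocab_file': 'vocab.txt'},
        'target_dataset': {'files': 'train.tgt.txt', 'vocab_file': 'vocab.txt'},
        'batch_size': 32,
    })
    iterator = tx.data.DataIterator(data)
    batch = iterator.get_next()   # dict of tensors for one mini-batch

    # Model: shared word embedder (shared vocabulary), Transformer encoder,
    # attentional RNN decoder.
    embedder = tx.modules.WordEmbedder(vocab_size=data.target_vocab.size,
                                       hparams={'dim': 512})
    encoder = tx.modules.TransformerEncoder(hparams={'dim': 512})
    enc_outputs = encoder(inputs=embedder(batch['source_text_ids']),
                          sequence_length=batch['source_length'])
    decoder = tx.modules.AttentionRNNDecoder(
        memory=enc_outputs,
        memory_sequence_length=batch['source_length'],
        vocab_size=data.target_vocab.size)
    outputs, _, _ = decoder(inputs=embedder(batch['target_text_ids']),
                            sequence_length=batch['target_length'] - 1)

    # Maximum-likelihood loss and a training op with default optimization hparams.
    mle_loss = tx.losses.sequence_sparse_softmax_cross_entropy(
        labels=batch['target_text_ids'][:, 1:],
        logits=outputs.logits,
        sequence_length=batch['target_length'] - 1)
    train_op = tx.core.get_train_op(mle_loss)

Inference utilities (e.g. beam_search_decode) and other learning paradigms (adversarial losses, reinforcement-learning agents) are covered under the Modules, Loss Functions, and Agents sections below.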
Overview
Examples
Examples by Models/Algorithms
Examples by Tasks
MISC
API
HParams
Data
Tokenizers
Vocabulary
Embedding
Data
Data Iterators
Data Utils
Core
Cells
Layers
Optimization
Exploration
Replay Memories
Modules
ModuleBase
Embedders
Encoders
Decoders
Classifiers
Regressors
Pre-trained
Connectors
Networks
Memory
Policy
Q-Nets
Agents
Sequence Agents
Episodic Agents
Agent Utils
Loss Functions
MLE Loss
Policy Gradient Loss
Reward
Adversarial Loss
Entropy
Loss Utils
Evaluations
BLEU
Accuracy
Models
ModelBase
Seq2seqBase
BasicSeq2seq
Executor
Context
Global Mode
Utils
Frequent Use
Variables
IO
DType
Shape
Dictionary
String
Meta
Mode
Misc
AverageRecorder