This directory contains the Python model-related files for training and inference with the RL4ReAl-based register allocator described in the following work.
This repo contains the source code and relevant information described in the paper (arXiv); please refer to the paper for more details.
RL4ReAl: Reinforcement Learning for Register Allocation, S. VenkataKeerthy, Siddharth Jain, Anilava Kundu, Rohit Aggarwal, Albert Cohen and Ramakrishna Upadrasta
Setup the environment using model/RL4ReAl/rl4real_env.yml with the following command:

```
conda env create -f rl4real_env.yml
```

Create a .env file in the path model/RL4ReAl/src. The .env file contains the necessary environment variables; refer to .env.example present in model/RL4ReAl/src for setting the required variables.
```
MODEL_DIR=<path/to/model/dir>
BUILD_DIR=<path/to/build/dir>
MODEL_PATH=<path/to/model/checkpoint>
CONFIG_DIR=<path/to/config/dir>
DATA_DIR=<path/to/dataset/dir>
```
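The .env file is a plain list of KEY=VALUE pairs. As a minimal illustrative sketch (not part of the repo; projects commonly use the python-dotenv package instead), such a file can be loaded like this:

```python
import os

def load_env(path=".env"):
    """Parse a simple KEY=VALUE .env file and export the variables.

    Illustrative helper only: blank lines and '#' comment lines are
    skipped, and keys/values are stripped of surrounding whitespace.
    """
    env = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    os.environ.update(env)  # make the variables visible to the process
    return env
```
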
Dataset generation can be done using the bash scripts located in model/RL4ReAl/preprocessing/v0
- The file flow.sh under the path model/RL4ReAl/preprocessing/v0/ contains the script to generate the dataset.
- It can be executed as follows:

  ```
  bash flow.sh <target_architecture> train <model>
  ```

- target_architecture: Specify either x86 or aarch64. Currently we only support these architectures.
- model: Indicate the model type, e.g., mlra
- Specify the DATA_DIR environment variable in the .env file with the path to the generated dataset, which the model uses for training.
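Since flow.sh only accepts the two architectures above, it can help to validate the arguments before launching it. The helper below is a hypothetical wrapper, not part of the repository:

```python
# Hypothetical wrapper around flow.sh; build_flow_command and its
# validation are illustrative only.
SUPPORTED_ARCHS = {"x86", "aarch64"}

def build_flow_command(target_architecture, model="mlra"):
    """Return the argv list for invoking flow.sh, rejecting
    architectures that are not currently supported."""
    if target_architecture not in SUPPORTED_ARCHS:
        raise ValueError(
            f"Unsupported architecture {target_architecture!r}; "
            f"expected one of {sorted(SUPPORTED_ARCHS)}"
        )
    return ["bash", "flow.sh", target_architecture, "train", model]
```

The resulting list can be handed directly to subprocess.run(...) from the model/RL4ReAl/preprocessing/v0 directory.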
Pre-existing datasets from open-source repositories can also be utilized for model training.

- Specify the path to the dataset in the DATA_DIR environment variable in the .env file.
Activate the rllib_env_2.2.0 environment.
- Run the following command:

  ```
  python experiment_ppo.py
  ```

- Parameters for training should be configured by setting the variables in model/RL4ReAl/src/ppo_new.py:
  - num_rollout_workers: Number of workers that can run in parallel
  - num_gpus: Number of GPUs that can be utilized
  - current_batch: Batch size for training
  - episode_numbercheck: Number of training episodes
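As an illustration, the training knobs listed above might be collected in one place before being handed to the trainer. The dict keys mirror the variable names from this README, but the actual wiring inside ppo_new.py may differ:

```python
# Illustrative only: default values are placeholders, and the
# override helper is hypothetical, not part of ppo_new.py.
TRAIN_CONFIG = {
    "num_rollout_workers": 4,     # workers that run rollouts in parallel
    "num_gpus": 1,                # GPUs that can be utilized
    "current_batch": 512,         # batch size for training
    "episode_numbercheck": 1000,  # number of training episodes
}

def override_from_env(config, env):
    """Return a copy of `config` with entries overridden by
    string-valued settings (e.g. from the .env file), cast to int.
    Keys in `env` that are not in `config` are ignored."""
    out = dict(config)
    for key, value in env.items():
        if key in out:
            out[key] = int(value)
    return out
```
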
Training logs are written to the ~/ray_results directory by default.
- Customize the path using the following syntax in experiment_ppo.py:

  ```
  ray.init(_temp_dir="<path_to_raylog>")
  ```
- LLVM logs are generated in the directory ml-llvm-project/model/RL4ReAl/src/log.logs. An alternate log directory can be specified in ppo_new.py.