1. Environment

    Python>=3.7.8
    PyTorch>=1.8.1+cu102
    TorchVision>=0.9.1+cu102

    >> pip install jiant packaging==21.3 transformers
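
    A quick way to confirm the minimum versions above is a comparison that ignores CUDA
    build tags such as "+cu102". The sketch below uses the packaging library pinned in
    the pip command (the helper name meets_minimum is ours, not part of any library):

    ```python
    from packaging import version

    def meets_minimum(installed: str, minimum: str) -> bool:
        """Compare version strings, dropping local build tags like '+cu102'."""
        return version.parse(installed.split("+")[0]) >= version.parse(minimum)

    # e.g. check the installed torch build against the 1.8.1 minimum:
    print(meets_minimum("1.8.1+cu102", "1.8.1"))  # True
    print(meets_minimum("0.9.0", "0.9.1"))        # False
    ```

    In a live environment you would pass torch.__version__ and torchvision.__version__
    to the same helper.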


2. Datasets

    Download the "data/tasks" folder from the SAM repository [10]: https://github.com/fuzihaofzh/AnalyzeParameterEfficientFinetune


3. Training on "mrpc" (the hyperparameters for the other datasets can be found in the Appendix):

    (stage-1)
    >> python3 demo_stage1.py run --data_dir [DATA-DIR] --exp_dir output/exps/mrpc --hf_pretrained_model_name_or_path roberta-base --tasks mrpc --train_batch_size 16 --num_train_epochs 10000 --min_train_steps 20000 --eval_every_steps 500 --keep_checkpoint_when_done --do_test --log_dir output/logs/runs/test/ --learning_rate 1e-4 --no_improvements_for_n_evals 40 --user_mode alp=0.01,bet=100,lamb_init=0.0001,lr_others=0.01,Nexp=12,nd=1,burnin_steps=12000,thin_steps=100,warmup_steps=10000 --seed 0 --run_name trial1_stage1_seed0 
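
    The --user_mode flag above packs the method's hyperparameters into one
    comma-separated key=value string. A minimal sketch of how such a string can be
    parsed into typed values (the function name parse_user_mode is hypothetical; the
    actual parsing inside demo_stage1.py may differ):

    ```python
    def parse_user_mode(spec: str) -> dict:
        """Split a string like 'alp=0.01,bet=100' into a dict, casting each
        value to int or float where possible and keeping the rest as strings."""
        out = {}
        for pair in spec.split(","):
            key, _, raw = pair.partition("=")  # split on the first '=' only
            for cast in (int, float):
                try:
                    out[key] = cast(raw)
                    break
                except ValueError:
                    continue
            else:
                out[key] = raw  # non-numeric values (e.g. file paths) stay strings
        return out

    hparams = parse_user_mode("alp=0.01,bet=100,lamb_init=0.0001,lr_others=0.01,"
                              "Nexp=12,nd=1,burnin_steps=12000,thin_steps=100,"
                              "warmup_steps=10000")
    print(hparams["bet"], hparams["alp"])  # 100 0.01
    ```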

    (stage-2)
    >> python3 demo_stage2.py run --data_dir [DATA-DIR] --exp_dir output/exps/mrpc --hf_pretrained_model_name_or_path roberta-base --tasks mrpc --train_batch_size 16 --num_train_epochs 10000 --min_train_steps 20000 --eval_every_steps 500 --keep_checkpoint_when_done --do_test --log_dir output/logs/runs/test/ --learning_rate 1e-3 --no_improvements_for_n_evals 40 --user_mode splevel=0.005,mcmc=0,lamb_path=output/exps/mrpc/runs/trial1_stage1_seed0/lambda_stats.pt --seed 0 --run_name trial1_stage2_mcmc0_seed0 
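
    Stage-2 consumes the lambda_stats.pt produced by the matching stage-1 run via
    lamb_path. A sketch of building that path from the stage-1 arguments, with the
    layout <exp_dir>/runs/<run_name>/lambda_stats.pt inferred from the two commands
    above (the helper name lambda_stats_path is ours):

    ```python
    from pathlib import Path

    def lambda_stats_path(exp_dir: str, stage1_run_name: str) -> Path:
        """Locate the stage-1 lambda statistics checkpoint for stage-2's lamb_path."""
        return Path(exp_dir) / "runs" / stage1_run_name / "lambda_stats.pt"

    print(lambda_stats_path("output/exps/mrpc", "trial1_stage1_seed0").as_posix())
    # output/exps/mrpc/runs/trial1_stage1_seed0/lambda_stats.pt
    ```

    Keeping --exp_dir and the stage-1 --run_name consistent between the two commands
    ensures this path exists when stage-2 starts.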
