A WRF example
Contents
Preparations
This page contains the steps I went through to run Noah-MP coupled to WRF, forced by Era-Interim data. It requires that WRF is set up and that the data are downloaded (see the page on downloading ECMWF data in this wiki). I download data using these lines:
cd /projects/researchers/researchers01/irenebn
./wrf_Era-I_scand.py --start_year 2013 --end_year 2013 --start_month 07 --end_month 07 --start_day 01 --end_day 01

(Setting the start and end day equal ensures that each day is saved in its own file, which I prefer.)
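To fetch a longer period one day at a time, a small loop can generate the calls. This is a dry-run sketch assuming the wrf_Era-I_scand.py interface shown above; remove the echo to actually start the downloads.

```shell
# Dry run: print one wrf_Era-I_scand.py call per day of July 2013.
# The script name and flags follow the example above; "echo" makes
# this print the commands instead of running them.
for day in $(seq -w 1 31); do
  echo ./wrf_Era-I_scand.py --start_year 2013 --end_year 2013 \
       --start_month 07 --end_month 07 --start_day "$day" --end_day "$day"
done
```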
If you haven't done that, please refer to the data page and the NOAH page.
Get NORSTORE access
Abel users have a disk capacity of 200 GB, but it is easy to apply for 1TB storage through NORSTORE. Apply for a user through metacenter.no/user/application/, and you'll get access through Abel (this storage is also called ASTRA):
ssh -YC irenebn@abel.uio.no
cd /projects/researchers/researchers01/irenebn
NORSTORE is a natural place to save your forcing data. I have saved my Era-Interim data in the folder /projects/researchers/researchers01/irenebn/Scandinavia.
This ASTRA folder can also be accessed from sverdrup, wessel and other UiO computers, using ssh:
ssh -X irenebn@sverdrup.uio.no
mkdir /var/sshfs/irenebn    # <-- replace irenebn with your username
sshfs irenebn@abel.uio.no:/projects/researchers/researchers01/irenebn /var/sshfs/irenebn
From then on, your files are accessible through /var/sshfs/irenebn (you may need to repeat the third line, sshfs..., after a reboot or lost connection).
A note on the file structure on Abel
When you log onto Abel, you access a login node. From here you can reach your files, either in your Abel home directory or on Astra. You should not run heavy jobs on the Abel login node; instead, submit them as batch jobs (see the section "job_wrf.sh"). Jobs run in your work directory:
$WORKDIR, i.e. /work/users/<username> (for instance /work/users/irenebn)
Note that your files will automatically be deleted from $WORKDIR after three weeks, so don't store any important files here.
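Since files on $WORKDIR are purged automatically, it can be useful to list which files are approaching the limit. A minimal sketch; the temporary demo directory is an assumption, and on Abel you would run find directly on $WORKDIR:

```shell
# List files older than 14 days, i.e. candidates for the automatic purge.
# A throwaway demo directory stands in for $WORKDIR here (GNU touch -d).
DIR=$(mktemp -d)
touch "$DIR/new_file"                   # recent: should not be listed
touch -d "20 days ago" "$DIR/old_file"  # old: should be listed
find "$DIR" -type f -mtime +14
```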
It is good practice to create different folders in your $WORKDIR (one for each simulation), matched by a folder in your Abel home directory. For instance, I would have a folder named
/usit/abel/u1/irenebn/Scandinavia/sensitivity/noahMP
to store the namelist files for my sensitivity tests with Noah-MP. However, I run the jobs in the folder
/work/users/irenebn/noahMP/ (on the $WORKDIR)
and when the job is finished, I move the output files and namelist files from /work/users/irenebn/noahMP to /projects/researchers/researchers01/irenebn/noahMP. See more in chapter 4 below.
Get NOTUR access through a project
If you plan to submit large, or many, jobs, consider getting access to NOTUR through a project (for instance the LATICE account NN9379k). Otherwise, you'll submit your jobs to the geofag account on Abel.
To check which accounts you have access to on Abel, type 'cost':
cost
Report for accounts with user irenebn on abel.uio.no
Allocation period 2016.2 (start Sat Oct 01 00:00:01 2016) (end Fri Mar 31 23:59:59 2017)
Last updated on Tue Nov 01 15:31:24 2016
==================================================
Account                              Core hours
==================================================
nn9373k  avail                          9999.50
nn9373k  usage                             0.50
nn9373k  reserved                          0.00
nn9373k  quota (pri)                   10000.00
nn9373k  quota (unpri)                       NA
--------------------------------------------------
geofag   avail                              NA
geofag   usage                              NA
geofag   reserved                        9600.00
==================================================
Go through the WRF tutorial
The WRF tutorial on the Geo-IT wiki is very helpful, especially if you're forcing WRF with Era-Interim data. wiki.uio.no/mn/geo/geoit/index.php/WRFand_WRF-CHEM
Consider signing up for m2lab
m2lab.org/ offers online courses in WRF. I highly recommend the course "Regional Climate Modelling Using WRF", which teaches how to design your model experiment.
Consider getting the book by Warner (2011)
The book Numerical Weather and Climate Prediction by Thomas Tomkins Warner (2011) is highly relevant for WRF modellers. I didn't find it at the UiO library, but it can be borrowed from UiB.
namelist.wps
When you have designed your experiment domain (domain resolution, size, position, nesting as well as time period), prepare namelist.wps. My namelist.wps looks like this:
&share
 wrf_core = 'ARW',
 max_dom = 2,
 start_date = '2013-07-01_00:00:00','2013-07-01_00:00:00',
 end_date   = '2014-04-01_00:00:00','2014-04-01_00:00:00',
 interval_seconds = 21600,
 io_form_geogrid = 2,
/

&geogrid
 parent_id         = 1, 1,
 parent_grid_ratio = 1, 5,
 i_parent_start    = 1, 35,
 j_parent_start    = 1, 31,
 e_we              = 99, 156,
 e_sn              = 99, 186,
 geog_data_res     = '10m', '2m',
 dx = 15000,
 dy = 15000,
 map_proj  = 'lambert',
 ref_lat   = 60.5,
 ref_lon   = 8.9,
 truelat1  = 60.5,
 truelat2  = 60.5,
 stand_lon = 8.9,
 geog_data_path = ' ',
/

&ungrib
 out_format = 'WPS',
 prefix = 'FILE',
/

&metgrid
 fg_name = 'FILE',
 constants_name = 'LSM:2013-07-01_00', 'Z:2013-07-01_00',
 io_form_metgrid = 2,
/
define_grid.py
To plot the domain, type
define_grid.py --path <path to namelist.wps>

define_grid.py --path /usit/abel/u1/irenebn/Scandinavia/sensitivity/noahMP
namelist.input
When you have decided which parameterizations to use (land surface model, boundary layer, microphysics, convection and radiation), prepare your namelist.input file. Mine looks like this. Note that I have two domains (two columns), SST update (the sst_update = 1 and auxinput4 lines in &time_control), and that I'm using Noah-MP with default options (sf_surface_physics = 4, but no Noah-MP options ...yet).
&time_control
 run_days    = 7,
 run_hours   = 0,
 run_minutes = 0,
 run_seconds = 0,
 start_year   = 2013, 2013,
 start_month  = 07, 07,
 start_day    = 01, 01,
 start_hour   = 00, 00,
 start_minute = 00, 00,
 start_second = 00, 00,
 end_year   = 2013, 2013,
 end_month  = 07, 07,
 end_day    = 08, 08,
 end_hour   = 00, 00,
 end_minute = 00, 00,
 end_second = 00, 00,
 interval_seconds = 21600,
 input_from_file = .true., .true.,
 history_interval = 180, 60,
 frames_per_outfile = 2300, 2300,
 restart = .false.,
 restart_interval = 43200,
 io_form_history = 2,
 io_form_restart = 2,
 io_form_input = 2,
 io_form_auxinput4 = 2,
 auxinput4_inname = "wrflowinp_d<domain>",
 auxinput4_interval = 360, 360,
 io_form_boundary = 2,
 debug_level = 0,
/

&domains
 time_step = 60,
 time_step_fract_num = 0,
 time_step_fract_den = 1,
 max_dom = 1,
 s_we = 1, 1,
 e_we = 99, 156,
 s_sn = 1, 1,
 e_sn = 99, 186,
 s_vert = 1, 1,
 e_vert = 50, 50,
 p_top_requested = 1000,
 num_metgrid_levels = 50,      # <-- change this to 38 levels to fix the error message
 num_metgrid_soil_levels = 4,
 dx = 15000, 3000,
 dy = 15000, 3000,
 grid_id = 1, 2,
 parent_id = 1, 1,
 i_parent_start = 1, 35,
 j_parent_start = 1, 31,
 parent_grid_ratio = 1, 5,
 parent_time_step_ratio = 1, 5,
 feedback = 1,
 smooth_option = 0,
 max_ts_locs = 4,
 ts_buf_size = 6600,
 max_ts_level = 1,
/

&physics
 mp_physics = 3, 3,
 ra_lw_physics = 4, 4,
 ra_sw_physics = 4, 4,
 radt = 15, 15,
 sf_sfclay_physics = 1, 1,
 sf_surface_physics = 4, 4,
 bl_pbl_physics = 1, 1,
 bldt = 0, 0,
 cu_physics = 1, 0,
 cudt = 5, 5,
 isfflx = 1,
 ifsnow = 1,
 icloud = 1,
 surface_input_source = 1,
 num_soil_layers = 4,
 sf_urban_physics = 0, 0,
 sst_update = 1,
/

&fdda
/

&dynamics
 w_damping = 0,
 diff_opt = 1, 1,
 km_opt = 4, 4,
 diff_6th_opt = 0, 0,
 diff_6th_factor = 0.12, 0.12,
 base_temp = 290.,
 damp_opt = 0,
 zdamp = 5000., 5000.,
 dampcoef = 0.2, 0.2,
 khdif = 0, 0,
 kvdif = 0, 0,
 non_hydrostatic = .true., .true.,
 moist_adv_opt = 1, 1,
 scalar_adv_opt = 1, 1,
/

&bdy_control
 spec_bdy_width = 5,
 spec_zone = 1,
 relax_zone = 4,
 specified = .true., .false.,
 nested = .false., .true.,
/

&grib2
/

&namelist_quilt
 nio_tasks_per_group = 0,
 nio_groups = 1,
/
job_wrf.sh
Next, you'll need a job script to submit your jobs to Abel. This script uses Python functions unique to the Abel setup. You'll find the .py scripts here (but you're not allowed to edit them):

/cluster/software/VERSIONS/wrf/3.7.1/bin
My job_wrf.sh looks like this. Most parts are commented out, because I usually only want to execute one of the functions at a time. Otherwise, I won't see what might have gone wrong with real.exe (run_init_wrf.py) if wrf.exe starts automatically and overwrites all the rsl.error.000x files.
#!/bin/bash
# Job name:
#SBATCH --job-name=noahMP
#
# Project:
#SBATCH --account=geofag
#
# Wall clock limit:
#SBATCH --time=40:0:0
#
# Max memory usage per task:
#SBATCH --mem-per-cpu=2000M
#
# Number of tasks (cores):
#SBATCH --ntasks=64
#
# Set up job environment (do not remove this line...)
source /cluster/bin/jobsetup

ulimit -s unlimited

export WRF_HOME=/cluster/software/VERSIONS/wrf/3.7.1/WRFV3
# This is to make sure I don't use WRF-CHEM
export WRF_CHEM=0
module load wrf

#define_grid.py --path /usit/abel/u1/irenebn/Scandinavia/sensitivity/noahMP

# Run geogrid.exe to create static data for this domain:
#run_geogrid.py -p /usit/abel/u1/irenebn/Scandinavia/sensitivity/noahMP --expid sensitivity/noahMP

# run_ungrib.py --expid sensitivity/noahMP \
#               --start_date "2013-07-01_00:00:00" \
#               --end_date "2014-04-01_00:00:00" \
#               --interval_seconds 21600 \
#               --prefix FILE \
#               --vtable /usit/abel/u1/irenebn/Scandinavia/sensitivity/Vtable.EI \
#               --datadir /projects/researchers/researchers01/irenebn/Scandinavia/

# run_ungrib.py --expid sensitivity/noahMP \
#               --start_date "2013-07-01_00:00:00" \
#               --end_date "2014-04-01_00:00:00" \
#               --interval_seconds 21600 \
#               --prefix LSM \
#               --vtable /usit/abel/u1/irenebn/Scandinavia/sensitivity/Vtable.EI \
#               --datadir /usit/abel/u1/irenebn/Scandinavia/sensitivity/noahMP/DATA/CONST/lsm

# run_ungrib.py --expid sensitivity/noahMP \
#               --start_date "2013-07-01_00:00:00" \
#               --end_date "2014-04-01_00:00:00" \
#               --interval_seconds 21600 \
#               --prefix Z \
#               --vtable /usit/abel/u1/irenebn/Scandinavia/sensitivity/Vtable.EI \
#               --datadir /usit/abel/u1/irenebn/Scandinavia/sensitivity/noahMP/DATA/CONST/z

# run_metgrid.py --expid sensitivity/noahMP --tbl /usit/abel/u1/irenebn/Scandinavia/sensitivity/METGRID.TBL

# run_init_wrf.py --expid sensitivity/noahMP --namelist /usit/abel/u1/irenebn/Scandinavia/sensitivity/noahMP/namelist.input

run_wrf.py --expid sensitivity/noahMP --npes 64 --namelist /usit/abel/u1/irenebn/Scandinavia/sensitivity/noahMP/namelist.input

# REMEMBER TO REMOVE NAMELIST.INPUT FROM THE WORK DIRECTORY

# Copy your data on your local machine
rsync -avz $WORKDIR/sensitivity/noahMP/namelist.input /projects/researchers/researchers01/irenebn/sensitivity/noahMP
rsync -avz $WORKDIR/sensitivity/noahMP/namelist.output /projects/researchers/researchers01/irenebn/sensitivity/noahMP
rsync -avz $WORKDIR/sensitivity/noahMP/rsl.out.0000 /projects/researchers/researchers01/irenebn/sensitivity/noahMP
rsync -avz $WORKDIR/sensitivity/noahMP/rsl.error.0000 /projects/researchers/researchers01/irenebn/sensitivity/noahMP
# rsync -avz $WORKDIR/noahMP/wrfou* /projects/researchers/researchers01/irenebn/sensitivity/noahMP
# rsync -avz $WORKDIR/noahMP/wrfrst* /projects/researchers/researchers01/irenebn/sensitivity/noahMP
run_geogrid.py
You need to rerun geogrid each time you change the domain (i.e. whenever namelist.wps changes). If you rerun geogrid, you need to rerun all the other steps, too. Remember to delete old files to avoid error messages: for instance, if you increase the domain size, your met_em files are no longer valid, so delete all met_em files before re-running geogrid, ungrib and metgrid. Remember also to remove namelist.wps and namelist.input from $WORKDIR before re-running, because you'll get frustrated if you think you have done everything right while WRF has actually used the old namelist file.
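This clean-up can be scripted. A sketch against a throwaway demo directory; on Abel you would point RUNDIR at your $WORKDIR sub-folder (e.g. /work/users/irenebn/noahMP) instead:

```shell
# Remove stale WPS output before re-running geogrid, ungrib and metgrid.
# RUNDIR is a demo directory here; use your own $WORKDIR sub-folder.
RUNDIR=$(mktemp -d)
touch "$RUNDIR/geo_em.d01.nc" "$RUNDIR/FILE:2013-07-01_00" \
      "$RUNDIR/met_em.d01.2013-07-01_00:00:00.nc" "$RUNDIR/namelist.wps"
# Old geogrid/ungrib/metgrid output and namelists are no longer valid:
rm -f "$RUNDIR"/geo_em.d0*.nc "$RUNDIR"/met_em.d0*.nc \
      "$RUNDIR"/FILE:* "$RUNDIR"/namelist.wps "$RUNDIR"/namelist.input
ls -A "$RUNDIR"
```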
This is how you do it (see also job_wrf.sh):
run_geogrid.py -p <path to folder containing namelist.wps> --expid <sub-folder under $WORKDIR>

run_geogrid.py -p /usit/abel/u1/irenebn/Scandinavia/sensitivity/noahMP --expid noahMP
After geogrid has run, look for:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!  Successful completion of geogrid.        !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
and check that the following files are generated:
/work/users/irenebn/noahMP/geo_em.d01.nc
/work/users/irenebn/noahMP/GEOGRID.TBL
/work/users/irenebn/noahMP/geogrid.log
.d01 refers to the outer domain. If you have more than one domain, you will get one geo_em file per domain.
It is good practice to check geogrid.log; it will give you useful information if something went wrong. But hopefully, it says:
*** Successful completion of program geogrid.exe ***
tail /work/users/irenebn/noahMP/geogrid.log
run_ungrib.py
Ungrib unpacks the GRIB input/forcing data and writes it to an intermediate format. This is how you run ungrib (see also job_wrf.sh):
run_ungrib.py --expid <sub-folder under $WORKDIR>
              <start and end dates>
              --interval_seconds 21600   (21600 s = every 6 hours)
              --prefix <if your namelist.wps says FILE, this should say FILE>
              --vtable <path to folder containing Vtable.EI if you're using Era-Interim>
              --datadir <path to folder containing input/forcing data>

run_ungrib.py --expid noahMP \
    --start_date "2013-07-01_00:00:00" --end_date "2014-04-01_00:00:00" \
    --interval_seconds 21600 --prefix FILE \
    --vtable /usit/abel/u1/irenebn/Scandinavia/sensitivity/Vtable.EI \
    --datadir /projects/researchers/researchers01/irenebn/Scandinavia/
run_ungrib.py --expid noahMP \
    --start_date "2013-07-01_00:00:00" --end_date "2014-04-01_00:00:00" \
    --interval_seconds 21600 --prefix LSM \
    --vtable /usit/abel/u1/irenebn/Scandinavia/sensitivity/Vtable.EI \
    --datadir /usit/abel/u1/irenebn/Scandinavia/sensitivity/noahMP/DATA/CONST/lsm
run_ungrib.py --expid noahMP \
    --start_date "2013-07-01_00:00:00" --end_date "2014-04-01_00:00:00" \
    --interval_seconds 21600 --prefix Z \
    --vtable /usit/abel/u1/irenebn/Scandinavia/sensitivity/Vtable.EI \
    --datadir /usit/abel/u1/irenebn/Scandinavia/sensitivity/noahMP/DATA/CONST/z
After ungrib has run, check
tail /work/users/irenebn/noahMP/ungrib.log
A typical error message appears when ungrib does not find the input files. Here, for instance, I have to download data for 1 July 2013, because they are not present in the folder where I have stored the input.
2016-11-01 17:00:56.602 --- Looking for data at time 2013-07-01_00
2016-11-01 17:00:56.602 --- ERROR: Data not found: 2013-07-01_00:00:00.0000
When ungrib.log says that everything is fine, check that the following files are generated:
/work/users/irenebn/noahMP/FILE:2013-07-01_00   (one file per date)
/work/users/irenebn/noahMP/LSM:2013-07-01_00    (one file)
/work/users/irenebn/noahMP/Z:2013-07-01_00      (one file)
/work/users/irenebn/noahMP/ungrib.log
/work/users/irenebn/noahMP/ungrib_data.log
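Whether ungrib produced one intermediate file per 6-hour step can be checked with a short loop. A sketch; here a temporary directory with one file deliberately present stands in for the run directory:

```shell
# Report which 6-hourly FILE:* intermediate files are missing for one day.
# RUNDIR is a demo directory here; point it at your $WORKDIR sub-folder.
RUNDIR=$(mktemp -d)
touch "$RUNDIR/FILE:2013-07-01_00"   # pretend only the 00 UTC file exists
for hh in 00 06 12 18; do
  [ -e "$RUNDIR/FILE:2013-07-01_$hh" ] || echo "missing: FILE:2013-07-01_$hh"
done
```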
It is good practice to check the files. They are written in an intermediate format, so you open them with the program rd_intermediate:
rd_intermediate.exe FILE:2013-07-01_00
Submit a job to the queue
To submit a job to the Abel queuing system, make sure that job_wrf.sh is correct (it is advisable to test a short run first). It is very easy to type a wrong file path so you don't find your files, or to forget to uncomment the lines you wanted to run.
sbatch job_wrf.sh
To check if the job is running, type either of the two:
squeue --user irenebn
squeue --account geofag

Output:

   JOBID PARTITION   NAME     USER  ST  TIME  NODES  NODELIST(REASON)
15842700    normal  noahMP  irenebn  R  4:42      4  c5-[13,17,22-23]
If your job doesn't start and the REASON column looks strange (see below), try reducing mem-per-cpu or the number of tasks in your job script.
   JOBID PARTITION   NAME     USER  ST  TIME  NODES  NODELIST(REASON)
15848597    normal  noahMP  irenebn  PD 0:00     16  (AssocGrpMemLimit)   # <-- reduce mem-per-cpu
While the job is running, you may check the logfile and the files being produced.
tail slurm-15842700.out
ll /work/users/irenebn/noahMP/met_em.d01.2013-07-0*
You may also look at the files that are generated (while the job is running):
module load ncview
ncview /work/users/irenebn/noahMP/met_em.d01.2013-07-03_00\:00\:00.nc &
run_metgrid.py
Metgrid takes care of the horizontal interpolation. (Information in the vertical is not touched by metgrid. For instance, if you want to run both Noah-MP and CLM, you may use the same met_em files, even though Noah-MP has 4 soil layers and CLM has 10. You may therefore reuse the same met_em files for sensitivity studies. Nice, huh?)
This is how you run metgrid (see also job_wrf.sh):
run_metgrid.py --expid <sub-folder under $WORKDIR> --tbl <path to folder containing METGRID.TBL>

run_metgrid.py --expid sensitivity/noahMP --tbl /usit/abel/u1/irenebn/Scandinavia/sensitivity/METGRID.TBL
After metgrid has run, look for:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!  Successful completion of metgrid.        !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
and check that the following files are generated:
/work/users/irenebn/noahMP/met_em.d01.nc
/work/users/irenebn/noahMP/METGRID.TBL
/work/users/irenebn/noahMP/metgrid.log
It is good practice to check metgrid.log; it will give you useful information if something went wrong. But hopefully, it says:
*** Successful completion of program metgrid.exe ***
tail /work/users/irenebn/noahMP/metgrid.log
You'll get error messages more often than you'd like. The following, for instance, means that you generated only one domain in geogrid and ungrib, but metgrid is looking for the second domain. To generate the second domain, change namelist.wps and rerun geogrid and ungrib before running metgrid again.
2016-11-02 04:22:57.937 --- Processing domain 2 of 2
2016-11-02 04:22:57.946 --- ERROR: Screwy NDATE: 0000-00-00_00:00:00
Do also check the met_em.d01 files. They are NetCDF files, and can be viewed with Ncview.
module load ncview
ncview /work/users/irenebn/noahMP/met_em.d01.2013-07-01_00\:00\:00.nc &
run_init_wrf.py (real.exe)
Real.exe takes care of the vertical interpolation. If you want to run both Noah-MP and CLM, you must run real.exe between the runs, because the two models have different numbers of soil layers (Noah-MP has 4, CLM has 10). You must therefore re-run real.exe between runs in sensitivity studies.
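As a sketch of what would change between the two runs, and assuming the WRF 3.x option numbering (sf_surface_physics = 4 for Noah-MP, 5 for CLM4), the relevant namelist.input entries are:

```
&physics
 sf_surface_physics = 4, 4,   # <-- 4 = Noah-MP; CLM4 would be 5, 5
 num_soil_layers    = 4,      # <-- Noah-MP has 4 soil layers; CLM4 has 10
```

The remaining &physics entries stay unchanged between the two runs.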
Open job_wrf.sh and comment out the metgrid line, and uncomment only the run_init line:
emacs -nw job_wrf.sh

# run_metgrid.py --expid noahMP --tbl /usit/abel/u1/irenebn/Scandinavia/sensitivity/METGRID.TBL
run_init_wrf.py --expid noahMP --namelist /usit/abel/u1/irenebn/Scandinavia/sensitivity/noahMP/namelist.input
# run_wrf.py --expid noahMP --npes 64 --namelist /usit/abel/u1/irenebn/Scandinavia/sensitivity/noahMP/namelist.input
Then submit the job to Abel, to run real.exe
sbatch job_wrf.sh
Keep an eye on the slurm output, and rsl.error.0000
tail slurm-<tab complete until you find the last number>.out
tail /work/users/irenebn/noahMP/rsl.error.0000
Don't be fooled by this message in the slurm output
Job 15848534 ("noahMP") completed on c5-30,c6-[29,34-35] at Wed Nov 2 10:21:10 CET 2016
because you might still find an error in rsl.error.0000, such as this
-------------- FATAL CALLED ---------------
FATAL CALLED FROM FILE:  <stdin>  LINE:  849
input_wrf.F: SIZE MISMATCH:
  namelist   ide,jde,num_metgrid_levels =  99  99  50;
  input data ide,jde,num_metgrid_levels =  99  99  38
-------------------------------------------
To check this, I look at the namelist.input in $WORKDIR, where it indeed says num_metgrid_levels = 50
less /work/users/irenebn/noahMP/namelist.input
Then I look at the metgrid files using ncdump -h, where it indeed says num_metgrid_levels = 38 ;
ncdump -h /work/users/irenebn/noahMP/met_em.d01.2013-07-01_00\:00\:00.nc
To solve this, I edit the namelist.input file -- both in $WORKDIR and my Abel home -- and try submitting job_wrf.sh again.
emacs -nw /work/users/irenebn/noahMP/namelist.input

 e_vert = 50, 50,              # <-- here's where I specify the number of layers I want
 p_top_requested = 1000,
 num_metgrid_levels = 38,      # <-- here's where I specify the number of metgrid layers

sbatch job_wrf.sh
run_wrf.py (wrf.exe)
wrf.exe runs the WRF model. It is good practice to run short simulations in the beginning, to allow for adjustments before the actual run. You may also generate restart files to allow continuing simulations after some (30?) days.
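The restart mechanism is driven by &time_control. A sketch of the changes for a continuation run, based on the namelist shown earlier (the dates are illustrative; restart_interval is in minutes, so the 43200 used above writes a restart file every 30 days):

```
&time_control
 restart          = .true.,       # <-- start from wrfrst_d0?_<date> instead of wrfinput
 restart_interval = 43200,        # <-- minutes; 43200 min = 30 days
 start_year       = 2013, 2013,   # <-- set the start time to the restart-file time
 start_month      = 07, 07,
 start_day        = 31, 31,
```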
Open job_wrf.sh and comment out the run_init line, and uncomment only the run_wrf line:
emacs -nw job_wrf.sh

# run_init_wrf.py --expid noahMP --namelist /usit/abel/u1/irenebn/Scandinavia/sensitivity/noahMP/namelist.input
run_wrf.py --expid noahMP --npes 64 --namelist /usit/abel/u1/irenebn/Scandinavia/sensitivity/noahMP/namelist.input
Adding some lines of rsync to your job script makes sure that the files are automatically copied to ASTRA (/projects/researchers/researchers01/<username>) when the job is completed. Note that rsl.error.0000 and rsl.out.0000 have the same names for real.exe and wrf.exe, so if you want to keep the real.exe rsl files, rename them before running wrf:

cd /projects/researchers/researchers01/irenebn/noahMP/
mv rsl.out.0000 rsl.REAL.out
mv rsl.error.0000 rsl.REAL.error
# Copy your data on your local machine
rsync -avz $WORKDIR/noahMP/namelist.input /projects/researchers/researchers01/irenebn/noahMP
rsync -avz $WORKDIR/noahMP/namelist.output /projects/researchers/researchers01/irenebn/noahMP
rsync -avz $WORKDIR/noahMP/rsl.out.0000 /projects/researchers/researchers01/irenebn/noahMP
rsync -avz $WORKDIR/noahMP/rsl.error.0000 /projects/researchers/researchers01/irenebn/noahMP
rsync -avz $WORKDIR/noahMP/wrfout* /projects/researchers/researchers01/irenebn/noahMP
rsync -avz $WORKDIR/noahMP/wrfrst* /projects/researchers/researchers01/irenebn/noahMP
Then submit the job to Abel, to run wrf.exe
sbatch job_wrf.sh
Keep an eye on the slurm output, and rsl.error.0000
tail slurm-<tab complete until you find the last number>.out
tail /work/users/irenebn/noahMP/rsl.error.0000
Running several simulations
One folder per simulation on $WORKDIR
It is wise to create one folder in your $WORKDIR for each simulation, matched by a folder in your Abel home directory. For a new simulation, I would copy a directory in my home, and create new folders on $WORKDIR and Astra:
cp -r /usit/abel/u1/irenebn/Scandinavia/sensitivity/noahMP /usit/abel/u1/irenebn/Scandinavia/sensitivity/noahUA   # here's where I store namelists and job_wrf.sh
mkdir /work/users/irenebn/noahUA                           # here's where the model is run
mkdir /projects/researchers/researchers01/irenebn/noahUA   # here's where I store the output (accessed from Sverdrup through /var/sshfs/irenebn)
Then I create symbolic links to the wrfinput files (or the met_em files, if I want to run real.exe again), so I don't take up more space than needed.
cd /work/users/irenebn/noahUA
ln -s ../noahMP/wrfinput_d01
ln -s ../noahMP/wrfbdy_d01
ln -s ../noahMP/wrflowinp_d01
for file in ../noahMP/met_em.d01.201*; do echo ln -s $file; done   # remove the "echo" to create the links
Final remarks
Good luck!
-Irene Brox Nilsen