diff --git a/README.md b/README.md
index 85d9c9e23aed90b9b82b21929e35df78997f1f2c..09c60e3ba0f37b69700991a587d6a5bad3c9ce9b 100644
--- a/README.md
+++ b/README.md
@@ -14,52 +14,52 @@ for further statistical analyses.
 
 # Installation instructions
 
-We recommend using a conda environment. Install instructions from the public
-enstools repo serve as base and have been adapted and extended.
+We recommend using a conda environment and the provided `environment.yml` file
+to create the environment.
 
+    git clone https://gitlab.com/Christoph.Fischer/enstools-feature.git
+    cd enstools-feature
+    conda env create --name feature2 --file=environment.yml
+    conda activate feature2
 
-    conda create --name enstools-feature python=3.7
-    conda activate enstools-feature
+Then install `enstools` directly from its repository:
 
-    # install requirements listed in given venv_setup.sh
-    pip install --upgrade pip
-    pip install wheel numpy==1.20.0
+    pip install -e git+https://github.com/wavestoweather/enstools.git@main#egg=enstools --no-deps
 
-    # integrate enstools
-    pip install -e git+https://github.com/wavestoweather/enstools.git@main#egg=enstools
+Then, to also be able to develop new techniques, install `enstools-feature` in editable mode:
 
-    # install requirements for enstools-feature, and install enstools in this environment
-    conda install --file requirements.txt
     pip install -e .
 
 
-Additionally, depending on the used feature identification strategies, additional packages may be required. // TODO
-
 # Usage: Applying existing strategies
 
-Here is a usage example, if you want to apply existing strategies in the code base to your data set.
+Each of the implemented strategies has a usage example under `feature/identification/[strategy]/run_identify.py`. The general setup based on the existing template is as follows:
+
 First, we need some imports, namely the 
 * `FeaturePipeline`, which executes the identification pipeline
 * `IdentificationTemplate`, the identification strategy; edit this according to your use case
 * `TrackingCompareTemplate`, the tracking strategy; edit this according to your use case
 * `template_pb2`, the protobuf Python file that is auto-generated at run time from the feature description you set. Use the one that matches your identification strategy; the files are named `*_pb2`, where `*` is the name of the identification module.
--> TODO: should not really need to set the template here, is specific to identification strategy!
 
-    from enstools.feature.pipeline import FeaturePipeline
-    from enstools.feature.identification.template import IdentificationTemplate
-    from enstools.feature.tracking.template import TrackingTemplate
-    from enstools.feature.identification._proto_gen import template_pb2
+We start by importing them:
+
+    from feature.pipeline import FeaturePipeline
+    from feature.identification.template import IdentificationTemplate
+    from feature.tracking.template import TrackingCompareTemplate
+    from feature.identification._proto_gen import template_pb2
 
-Then, we initialize the pipeline with the protobuf description and optional the processing mode. For 3D data, this resembles if identification should be performed individual on 2D (latlon) or 3D subsets.
+Then, we initialize the pipeline with the protobuf description and, optionally, the processing mode and the number of parallel workers. The processing mode defines whether the identification is performed on 2D subsets (each level separately) or on 3D subsets of the data.
 
-    pipeline = FeaturePipeline(template_pb2, processing_mode='2d')
+    pipeline = FeaturePipeline(template_pb2, processing_mode='2d', workers=16)
 
 Then, we initialize and set our strategies. The tracking strategy can be set to `None` to skip tracking.
 
     i_strat = IdentificationTemplate(some_parameter='foo')
     t_strat = TrackingCompareTemplate()
+    t_strat.set_threads(16)
+
     pipeline.set_identification_strategy(i_strat)
-    pipeline.set_tracking_strategy(t_strat) # or None as argument if no tracking
+    pipeline.set_tracking_strategy(t_strat)
 
 Next, set the data to process.
 
@@ -68,33 +68,35 @@ Next, set the data to process.
 Then, the pipeline can be executed, starting the identification and subsequently the tracking.
 
     pipeline.execute()
-    # or separated...
-    # pipeline.execute_identification()
-    # pipeline.execute_tracking()
 
-This generates an object description based on the set protobuf format. If tracking has been used, tracks based on a default simple heuristic can be generated. See docstrings for further details. The object description holds the objects, and if tracking has been executed a graph structure and the generated tracks respectively.
+This generates an object description based on the chosen protobuf format. If tracking has been used, it also yields a list of tracks. See the docstrings for further details on the internal procedure. The object description holds the objects and, if tracking has been executed, the graph structure and the generated tracks.
 
-    pipeline.generate_tracks()
-    od = pipeline.get_object_desc()
+    od = pipeline.get_feature_desc()
 
+This structure can also be transformed into a `DataGraph`; see the template's `run_identify.py` for an example.
 The output data set and description can be saved:
 
     pipeline.save_result(description_type='json', description_path=..., dataset_path=...)
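+
+Putting the individual steps together, a minimal run script looks roughly as follows. This is only a sketch assembled from the snippets above; the call that sets the input data is omitted and indicated by a comment (see the corresponding step above and the `run_identify.py` templates for the exact call):
+
+    from feature.pipeline import FeaturePipeline
+    from feature.identification.template import IdentificationTemplate
+    from feature.tracking.template import TrackingCompareTemplate
+    from feature.identification._proto_gen import template_pb2
+
+    # pipeline with protobuf description, 2D processing and 16 parallel workers
+    pipeline = FeaturePipeline(template_pb2, processing_mode='2d', workers=16)
+
+    # identification and tracking strategies
+    i_strat = IdentificationTemplate(some_parameter='foo')
+    t_strat = TrackingCompareTemplate()
+    t_strat.set_threads(16)
+    pipeline.set_identification_strategy(i_strat)
+    pipeline.set_tracking_strategy(t_strat)
+
+    # ... set the data to process here, as described above ...
+
+    # identification, tracking, and export of the results
+    pipeline.execute()
+    od = pipeline.get_feature_desc()  # objects, graph and tracks
+    pipeline.save_result(description_type='json', description_path=..., dataset_path=...)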
 
 
-Some of the identification strategies we provide include:
-- `african_easterly_waves`: Identify AEWs based on an approach similar to [https://doi.org/10.1002/gdj3.40](Belanger et al. (2016))
+Some of the identification strategies we provide (several of them quite application-specific) include:
+- `african_easterly_waves`: Identify AEWs based on an approach similar to [Belanger et al. (2016)](https://doi.org/10.1002/gdj3.40). Generated wave trough data for the ERA-5 reanalysis is available [here](https://zenodo.org/doi/10.5281/zenodo.8403743).
+- `aew_vortices` (experimental): Identify vortices in African Easterly Waves by detecting critical points in the streamline field.
+- `aew_pv`: Identify Potential Vorticity features within African Easterly Waves. This requires already identified wave troughs. The data has been computed for 20 years of ERA-5 and is available [here](https://zenodo.org/doi/10.5281/zenodo.10061471). A publication describing the methodology is forthcoming.
 - `overlap_example`: Simple starting point for identifying objects which should later be tracked via overlap. It creates a new field and writes `i` at positions where the object with ID `i` has been identified.
-- `pv_streamer`: Identify PV anomalies in 2D (streamers) or 3D, see [https://doi.org/10.5194/gmd-2021-424](Fischer et al. (2022))
+- `pv_streamer`: Identify PV anomalies along the tropopause in 2D (streamers) or 3D; see [Fischer et al. (2022)](https://doi.org/10.5194/gmd-2021-424).
+- `storm`: Simple identification of storms via pressure thresholds. More sophisticated algorithms exist; this one rather serves as a showcase.
 - `template`: The starting template for your own strategy. If you want to identify areas and track them via overlap, you can use `overlap_example` instead.
+- `threshold`: Template for identifying features via single or double thresholding of a field.
 
 Some of the tracking strategies we provide include:
 - `african_easterly_waves`: Tracking of AEWs by comparing the locations of line strings.
 - `overlap_tracking`: General overlap tracking. It takes the name of a `DataArray` as parameter, ideally one whose values represent the object's ID at each location. It works well together with the `overlap_example` identification (see the sketch after this list).
-- `template_object_compare`: Template for tracking, where the tracking strategy is solely based on pairwise comparison of object descriptions from consecutive timesteps.
+- `template_feature_compare`: Template for tracking, where the tracking strategy is solely based on pairwise comparison of object descriptions from consecutive timesteps.
 - `template`: Template for a fallback tracking strategy which requires more complex heuristics than the above-mentioned ones.
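+
+To apply one of the listed strategies instead of the template, swap the imported strategy classes in the run script. A hypothetical sketch for the overlap pair is shown below; the module paths follow the pattern above, but the class names `OverlapIdentification` and `OverlapTracking` as well as the parameter name `field_name` are assumptions, so check the respective `run_identify.py` for the actual names:
+
+    # NOTE: class and parameter names below are assumptions; see the strategy's run_identify.py
+    from feature.identification.overlap_example import OverlapIdentification
+    from feature.tracking.overlap_tracking import OverlapTracking
+
+    i_strat = OverlapIdentification()
+    t_strat = OverlapTracking(field_name='objects')  # name of the DataArray holding the object IDs
+    pipeline.set_identification_strategy(i_strat)
+    pipeline.set_tracking_strategy(t_strat)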
 
 # Usage: Adding strategies
+## Identification
 
 We provide some template files, which we recommend as a starting point for your own identification strategy. If you want to add your own identification (and tracking) strategy to the framework, you need to:
 - Copy over the template folder and rename it and the files accordingly. If you implement a tracking method that relies on pairwise comparison of objects from consecutive timesteps, you can use the `template_feature_compare` template.
@@ -103,17 +105,16 @@ We provide some template files, which we recommend as a starting point for your
 - In `identification.py` (`tracking.py`), implement your identification (tracking) strategy. See the template again for a useful example. There are a few methods (a skeleton combining them is sketched after this list):
  - `__init__` gets called from the run script, so the user can set parameters for the algorithm here.
  - `precompute` is called once for the entire data set. The data set can be altered here (temporally and spatially). Also if the strategy should return an additional field (`DataArray`), it should be initialized here as shown in the template.
- - In `identify` goes your identification strategy. This method is called in parallel, and should return a list of objects. See the template and the docstrings for more information. It returns the provided subset (which can be modified in terms of values), and a list of objects. New (empty) objects can be obtained using `o = self.get_new_object(id=obj_id)`, returning an object `o` with the set ID `o.id` and the object properties defined via the protobuf description at `o.properties`.
+ - `identify` contains your actual identification strategy. This method is called in parallel on the data subsets and should return the provided subset (whose values may have been modified) together with a list of detected objects. See the template and the docstrings for more information. New (empty) objects can be obtained via `o = self.get_new_feature(index_accessor)`, which returns an object `o` with its ID set at `o.id` and with the object properties defined by the protobuf description at `o.properties`.
  - `postprocess` is called once for the entire data set after identification. The data set and the object description can be changed here.
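+
+As an orientation, a strategy class combining these methods could look like the following sketch. The base class name `IdentificationStrategy`, the exact method signatures, and the return values of `precompute` and `postprocess` are assumptions here; the template files define the authoritative interface:
+
+    from feature.identification import IdentificationStrategy  # base class name is an assumption
+
+    class MyIdentification(IdentificationStrategy):
+        def __init__(self, threshold=1.0):
+            # algorithm parameters, set by the user in the run script
+            self.threshold = threshold
+
+        def precompute(self, dataset=None):
+            # called once for the whole data set: subset/alter the data here and
+            # initialize an additional output field (DataArray) if one should be returned
+            return dataset
+
+        def identify(self, subset):
+            # called in parallel on each 2D/3D subset: detect the features and
+            # describe them via protobuf objects
+            obj = self.get_new_feature(1)  # the index accessor used here is a placeholder
+            # obj.properties.<...> holds the properties defined in your .proto file
+            return subset, [obj]
+
+        def postprocess(self, dataset, features):
+            # called once after identification; data set and description can be adjusted here
+            return dataset, features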
 
-* TODO tracking
+## Tracking
+TODO
 
 # Acknowledgment and license
 
-`enstools.feature` is a collaborative development within
+`enstools.feature` is developed within the
 Waves to Weather (SFB/TRR165) project, and funded by the
 German Research Foundation (DFG).
 
-A full list of code contributors can [CONTRIBUTORS.md](./CONTRIBUTORS.md). TODO
-
 The code is released under an [Apache-2.0 licence](./LICENSE).