From e572ab8dc5c6856311e6031a6461d82b13af6ef9 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Jes=C3=BAs=20P=C3=A9rez=20Lorenzo?=
Date: Mon, 27 Jan 2025 19:24:19 +0000
Subject: [PATCH] chore: add and fix text

---
 README.md | 12 +++++-------
 about.md  | 19 ++++++++++++-------
 2 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/README.md b/README.md
index 5559abb..312b773 100644
--- a/README.md
+++ b/README.md
@@ -7,11 +7,9 @@ include_toc: true
 
 Based in [PrefSPEC: Performance Profiling-based Proactive Security Policy Enforcement for Containers](https://ieeexplore.ieee.org/document/10577533) document presented in [1], this repository contains source files used to generate and process data.
 
-Main Reference: [PrefSPEC document](PerfSPEC.pdf) as [White paper](https://en.wikipedia.org/wiki/White_paper)
-
-[Presentación in Spanish](presentacion.pdf)
-
-[How to install](https://repo.jesusperez.pro/jesus/perfspec-learning/src/branch/main/install.md) covers basic enviroment,tools, and recommendations.
+- Main reference: [PerfSPEC reference document](PerfSPEC.pdf) as a [White paper](https://en.wikipedia.org/wiki/White_paper)
+- [Presentation in Spanish](presentacion.pdf)
+- [How to install](https://repo.jesusperez.pro/jesus/perfspec-learning/src/branch/main/install.md) covers the basic environment, tools, and recommendations.
@@ -38,7 +36,7 @@ There are additional documents to this:
 - [Quick start](installation.md) and installation
 - [Intro](intro.md) about why and what is done
 - [About](about.md) goals and experiences
-- [Presentation](presentacion.pdf) slides to explain process and enviroment
+- [Presentation in Spanish](presentacion.pdf) slides to explain the process and environment
 
 > [!NOTE]
 > It is considered that __event data collection__ in `raw-audit-logs.log.xz` are realistic and representative to simulate
@@ -97,7 +95,7 @@ If you wish to [collect](collect) your own dataset, there are several source fil
 
 `data/main-audit-logs.log` Data logs fixed and clean
 `data/actions-dataset-audit.txt` Source content for learning models
-`data/actions_distribution.pdf` Generated graph view of actions / events distribution
+`data/actions_distribution.pdf` Autogenerated graph view of the actions and events distribution
 
 ## Data Models
 
diff --git a/about.md b/about.md
index 0980afb..62bd56b 100644
--- a/about.md
+++ b/about.md
@@ -7,9 +7,14 @@ include_toc: true
 
 Based in [PrefSPEC: Performance Profiling-based Proactive Security Policy Enforcement for Containers](https://ieeexplore.ieee.org/document/10577533) document presented in [1], this repository contains source files used to generate and process data.
 
-[PrefSPEC document](PerfSPEC.pdf)
+For more info, see:
 
-[Presentación in Spanish](presentacion.pdf)
+- [PerfSPEC main description](Readme.md)
+- [PerfSPEC introduction](intro.md)
+- [PerfSPEC reference document](PerfSPEC.pdf)
+- [Presentation in Spanish](presentacion.pdf)
+- [How to install](https://repo.jesusperez.pro/jesus/perfspec-learning/src/branch/main/install.md)
+- [Autogenerated graph view of the actions and events distribution](actions_distribution.pdf)
@@ -18,14 +23,14 @@ Based in [PrefSPEC: Performance Profiling-based Proactive Security Policy Enforc
 # What is done so far ?
 
 - [X] Good look and feel and interactions among processing, analisys and presentation layers
-- [X] Using better software packages management like [uv](https://docs.astral.sh/uv/) to complement Python **pip**
-- [X] A notebook open like [Marimo](https://marimo.io/) to support alternative dataframes engines like [Polars](https://pola.rs/) rather than [Pandas](https://pandas.pydata.org/)
+- [X] Use better software package management like [uv](https://docs.astral.sh/uv/) to complement Python **pip**
+- [X] Use an open and compatible notebook like [Marimo](https://marimo.io/) to support alternative dataframe engines like [Polars](https://pola.rs/) rather than [Pandas](https://pandas.pydata.org/)
 - [X] Use settings and structures to play with different settinigs and options
-- [X] Implement a customize one [LSTM](https://en.wikipedia.org/wiki/Long_short-term_memory) model within notebooks
-- [X] Use of different metrics to apply to the training models with custumezable adjustments
+- [X] Implement one customized [LSTM](https://en.wikipedia.org/wiki/Long_short-term_memory) model within notebooks
+- [X] Use different metrics for the training models, with customizable adjustments and checkpoints
 - [X] Use notebooks as python scripts for command-line use cases like collect predictions or automatic train models
 - [X] Use [Dry](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) to reuse code and centralize common tasks like settings or loading resources. This is main use and existence of [lib_perfspec.py](learning/python/lib_perfspec.py)
-- [X] Spliting basic tasks among seveal specific **notebooks**:
+- [X] Split basic tasks among several specific **notebooks**:
 - **Preprocessing data** collection to generate clean and usefull (critical actions) info to train models and ranking [prepare_perfspec.py](learning/python/prepare_perfspec.py)
 - **Train models** to get predictions [train_perfspec.py](learning/python/train_perfspec.py)
 - **Get predictions** from existing models [run_perfspec.py](learning/python/run_perfspec.py)