Scikit Modules#
- class brainmaze_eeg.scikit_modules.FeatureAugmentorModule#
Feature augmentation using the ‘augment_features’ function from the ‘PiesUtils’ package. See the source code for additional details.
- fit(X=None, Y=None)#
- fit_transform(X, Y=None)#
- transform(X)#
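The module follows the usual fit/fit_transform/transform contract, so it can be used standalone or inside a scikit-learn Pipeline. A minimal usage sketch, assuming the default constructor shown above (no required arguments) and an arbitrary 2-D feature matrix; the data is illustrative only:

```python
import numpy as np
from brainmaze_eeg.scikit_modules import FeatureAugmentorModule

X = np.random.randn(100, 8)            # illustrative feature matrix: 100 samples, 8 features

augmentor = FeatureAugmentorModule()    # assumes no required constructor arguments
X_aug = augmentor.fit_transform(X)      # delegates to PiesUtils' augment_features
```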
- class brainmaze_eeg.scikit_modules.Log10Module#
- fit(X, Y=None)#
- fit_transform(X, Y=None)#
- transform(X, Y=None)#
- class brainmaze_eeg.scikit_modules.LogModule#
- fit(X, Y=None)#
- fit_transform(X, Y=None)#
- transform(X, Y=None)#
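Both log modules expose the same fit/fit_transform/transform interface as the other transformers here. A minimal sketch, assuming they apply an element-wise logarithm to a 2-D feature matrix; strictly positive values are used to stay inside the logarithm's domain:

```python
import numpy as np
from brainmaze_eeg.scikit_modules import Log10Module, LogModule

X = np.abs(np.random.randn(100, 8)) + 1e-6   # strictly positive, illustrative features

X_log10 = Log10Module().fit_transform(X)     # element-wise base-10 logarithm (assumed)
X_ln = LogModule().fit_transform(X)          # element-wise natural logarithm (assumed)
```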
- class brainmaze_eeg.scikit_modules.PCAModule(var_threshold=0.98)#
- fit(X, y=None)#
Fit the model with X.
- Parameters:
X ({array-like, sparse matrix} of shape (n_samples, n_features)) – Training data, where n_samples is the number of samples and n_features is the number of features.
y (Ignored) – Ignored.
- Returns:
self – Returns the instance itself.
- Return type:
object
- fit_transform(X, y=None)#
Fit the model with X and apply the dimensionality reduction on X.
- Parameters:
X ({array-like, sparse matrix} of shape (n_samples, n_features)) – Training data, where n_samples is the number of samples and n_features is the number of features.
y (Ignored) – Ignored.
- Returns:
X_new – Transformed values.
- Return type:
ndarray of shape (n_samples, n_components)
Notes
This method returns a Fortran-ordered array. To convert it to a C-ordered array, use ‘np.ascontiguousarray’.
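A minimal sketch of fitting PCAModule and reducing a feature matrix; var_threshold is the documented constructor argument (presumably the fraction of explained variance to retain), and the data below is illustrative:

```python
import numpy as np
from brainmaze_eeg.scikit_modules import PCAModule

X = np.random.randn(200, 32)            # illustrative training data: 200 samples, 32 features

pca = PCAModule(var_threshold=0.98)     # retain components up to ~98% explained variance (assumed meaning)
X_reduced = pca.fit_transform(X)        # ndarray of shape (n_samples, n_components)
```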
- class brainmaze_eeg.scikit_modules.PCAModuleSVD(var_threshold=0.98)#
- fit(X, Y=None)#
- fit_transform(X, Y=None)#
- transform(X, Y=None)#
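PCAModuleSVD exposes the same constructor argument and fit/fit_transform/transform methods, so it can be swapped in for PCAModule without other changes; a brief sketch, reusing the illustrative X from above:

```python
from brainmaze_eeg.scikit_modules import PCAModuleSVD

pca_svd = PCAModuleSVD(var_threshold=0.98)
X_reduced_svd = pca_svd.fit_transform(X)    # same interface as PCAModule
```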
- class brainmaze_eeg.scikit_modules.ZScoreModule(trainable=False, continuous_learning=False, multi_class=False)#
Z-score normalization compatible with sklearn.pipeline.Pipeline. Supports continuous learning, i.e. continuous adaptation of the normalization statistics during inference.
- Modes (Z-score normalization):
- Z-score normalization with fixed mean and std values based on the initial training dataset. Optional category-wise normalization with mean and std values estimated from the training dataset (the number of output features is multiplied by the number of categories).
- Z-score normalization with initial mean and std values trained on the training dataset, adapted during inference (see https://stats.stackexchange.com/questions/211837/variance-of-subsample).
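The adaptive mode relies on merging the statistics of previously seen samples with those of each new batch. A minimal sketch of the pooled mean/variance update discussed at the link above; the function pooled_stats and its variable names are illustrative, not the module's internals:

```python
import numpy as np

def pooled_stats(mean_a, var_a, n_a, mean_b, var_b, n_b):
    """Combine mean/variance of two disjoint samples into overall mean/variance.

    Uses the population (biased) variance convention: Var = E[X^2] - E[X]^2.
    """
    n = n_a + n_b
    mean = (n_a * mean_a + n_b * mean_b) / n
    # weighted E[X^2] of both subsamples minus the squared pooled mean
    var = (n_a * (var_a + mean_a ** 2) + n_b * (var_b + mean_b ** 2)) / n - mean ** 2
    return mean, var, n
```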
- continuous_learning#
If True, the instance updates the mean and variance values at each prediction step. Initial outlier filtering is recommended.
- Type:
bool
- trainable#
If False, the instance normalizes inference data using the mean and std of the data itself. If True, the instance remembers the mean and variance values of the training data.
- Type:
bool
- multi_class#
If True, the instance performs normalization for each training class separately. The number of output features is multiplied by the number of training categories.
- Type:
bool
- mean#
Trained mean values for each feature. If multi_class == True, a list of numpy ndarrays, one per category.
- Type:
numpy ndarray / list
- std#
Trained standard deviation values for each feature.
- Type:
numpy ndarray
- N#
Number of samples used to estimate the current mean and std values.
- Type:
int
- fit(X=None, Y=None)#
- Parameters:
X (numpy ndarray) – shape [n_samples, n_features]
Y (list or numpy array, optional) – category reference for each sample; required only when multi_class normalization is used
- Return type:
None
- fit_transform(X=None, Y=None)#
- Parameters:
X (numpy ndarray) – shape [n_samples, n_features]
Y (list or numpy array, optional) – category reference for each sample; required only when multi_class normalization is used
- Returns:
transformed_data – shape [n_samples, n_features]
- Return type:
numpy ndarray
- transform(X=None)#
- Parameters:
X (numpy ndarray) – shape [n_samples, n_features]
- Returns:
transformed_data – shape [n_samples, n_features]
- Return type:
numpy ndarray
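A minimal usage sketch of the multi-class mode, using only the documented constructor flags and method signatures; the data and labels below are illustrative:

```python
import numpy as np
from brainmaze_eeg.scikit_modules import ZScoreModule

X = np.random.randn(100, 8)                  # illustrative features: 100 samples, 8 features
Y = np.random.randint(0, 3, size=100)        # illustrative category labels (3 classes)

zs = ZScoreModule(trainable=True, multi_class=True)
zs.fit(X, Y)                                 # per-category mean/std estimated from the training data
X_norm = zs.transform(X)                     # per the docs, feature count is multiplied by the number of categories
```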