secml_malware.attack.whitebox package

Submodules

secml_malware.attack.whitebox.c_discretized_bytes_evasion module

class secml_malware.attack.whitebox.c_discretized_bytes_evasion.CDiscreteBytesEvasion(end2end_model: secml_malware.models.c_classifier_end2end_malware.CClassifierEnd2EndMalware, index_to_perturb: list, iterations: int = 100, is_debug: bool = False, random_init: bool = False, threshold: float = 0.5, penalty_regularizer: float = 0, chunk_hyper_parameter: int = 256)

Bases: secml_malware.attack.whitebox.c_end2end_evasion.CEnd2EndMalwareEvasion

Creates the attack that perturbs the bytes of a Windows PE malware at the indexes specified by index_to_perturb.

apply_feature_mapping(x: secml.array.c_array.CArray) secml.array.c_array.CArray

Applies the feature extraction

Parameters

x (CArray) – the input malware sample

Returns

the feature vector

Return type

CArray

compute_penalty_term(original_x: secml.array.c_array.CArray, adv_x: secml.array.c_array.CArray, par: float) torch.Tensor

Computes the penalty term as a torch node

Parameters
  • original_x (CArray) – the original malware sample

  • adv_x (CArray) – the adversarial malware version

  • par (float) – the regularization parameter

Returns

a node of the torch computation graph containing the penalty value

Return type

torch.Tensor
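
The concrete form of the penalty is defined by each subclass. As a minimal sketch under the assumption of a simple distance-based regularizer (not necessarily the library's actual formulation), a par-scaled L2 penalty would look like:

    import torch

    def l2_penalty(original_x: torch.Tensor, adv_x: torch.Tensor, par: float) -> torch.Tensor:
        # Assumed form: regularization weight times the L2 distance between
        # the original and the adversarial byte sequences.
        return par * torch.norm(adv_x.float() - original_x.float(), p=2)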

infer_step(x_init: secml.array.c_array.CArray) float

Returns the prediction w.r.t. the malware class

Parameters

x_init (CArray) – the sample to use for the forward step

Returns

the malware score

Return type

float

invert_feature_mapping(x: secml.array.c_array.CArray, x_adv: secml.array.c_array.CArray) secml.array.c_array.CArray

Inverts the feature mapping

Parameters
  • x (CArray) – the original sample

  • x_adv (CArray) – the adversarial sample

Returns

the inverted feature mapping of the adv sample

Return type

CArray

loss_function_gradient(original_x: secml.array.c_array.CArray, adv_x: secml.array.c_array.CArray, penalty_term: torch.Tensor) torch.Tensor

Computes the gradient of the loss function of the target model

Parameters
  • original_x (CArray) – the original malware sample

  • adv_x (CArray) – the adversarial malware sample

  • penalty_term (torch.Tensor) – the penalty term

Returns

the gradient of the model loss w.r.t. the input, computed at the embedding layer

Return type

torch.Tensor

optimization_solver(E: torch.Tensor, gradient_f: torch.Tensor, index_to_consider: list, x_init: secml.array.c_array.CArray) secml.array.c_array.CArray

Optimizes the end-to-end evasion

Parameters
  • E (torch.Tensor) – the embedding matrix E, with all the embedded values

  • gradient_f (torch.Tensor) – the gradient of the function w.r.t. the embedding

  • index_to_consider (list) – the list of indexes to perturb

  • x_init (CArray) – the input sample to manipulate

Returns

the adversarial malware

Return type

CArray

Given the starting byte, the gradient, and the embedding map, it returns a list of distances (see the sketch after the parameter list below).

Parameters
  • start_byte (int) – the starting byte for the search

  • gradient (torch.Tensor) – the gradient

  • embedding_bytes (torch.Tensor) – the embedding matrix with all the bytes embedded

  • invalid_val (optional, default np.infty) – the invalid value to use

  • invalid_pos (int, optional, default -1) – the position of the padding value.
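
optimization_solver and the distance helper above implement a descent-direction byte search: for each perturbable position, every candidate byte embedding is projected onto the negative gradient direction, and the byte closest to that line (among those moving the loss downhill) is selected. The sketch below is an illustrative reimplementation of the selection step in the style of the heuristic by Kolosnjaji et al. (2018), not the library's internal code:

    import torch

    def closest_byte_along_gradient(z_i: torch.Tensor, g_i: torch.Tensor, E: torch.Tensor) -> int:
        # z_i: (d,) embedding of the byte currently at this position
        # g_i: (d,) gradient of the loss w.r.t. that embedding
        # E:   (256, d) embedding matrix with all byte values embedded
        n = -g_i / (g_i.norm() + 1e-12)                    # descent direction
        s = (E - z_i) @ n                                  # signed step of each candidate along n
        d = (E - (z_i + s.unsqueeze(1) * n)).norm(dim=1)   # distance of each candidate to the line
        d[s <= 0] = float("inf")                           # discard bytes that do not decrease the loss
        return int(torch.argmin(d).item())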

secml_malware.attack.whitebox.c_end2end_evasion module

class secml_malware.attack.whitebox.c_end2end_evasion.CEnd2EndMalwareEvasion(end2end_model: secml_malware.models.c_classifier_end2end_malware.CClassifierEnd2EndMalware, indexes_to_perturb: list, iterations: int = 100, is_debug: bool = False, random_init: bool = False, threshold: float = 0.5, penalty_regularizer: float = 0, store_checkpoints: Optional[int] = None)

Bases: secml.adv.attacks.evasion.c_attack_evasion.CAttackEvasion

Base abstract class for implementing end-to-end evasion attacks against malware detectors.

abstract apply_feature_mapping(x) secml.array.c_array.CArray
abstract compute_penalty_term(original_x: secml.array.c_array.CArray, adv_x: secml.array.c_array.CArray, par: float) float
create_real_sample_from_adv(original_file_path: str, x_adv: secml.array.c_array.CArray, new_file_path: Optional[str] = None) bytearray

Creates a real adversarial example

Parameters
  • original_file_path (str) – the original malware sample

  • x_adv (CArray) – the perturbed malware sample, as created by the optimizer

  • new_file_path (str, optional, default None) – the path where to save the adversarial malware. Leave None to not save the result to disk

Returns

the adversarial malware, as a bytearray

Return type

bytearray
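
For instance, after a successful run() (which, as in secml's CAttackEvasion, returns the tuple y_pred, adv_score, adv_ds, f_obj), the optimizer output can be written back into a working executable. A minimal sketch, with hypothetical paths and attack being any concrete subclass of this class:

    # adv_ds is the dataset returned by a previous attack.run(...) call
    adv_x = adv_ds.X[0, :]
    real_bytes = attack.create_real_sample_from_adv(
        "samples/malware.exe",                    # original sample on disk
        adv_x,                                    # perturbed sample from the optimizer
        new_file_path="samples/malware_adv.exe",  # optional: also write the result to disk
    )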

f_eval()

Returns the number of function evaluations made during the attack.

grad_eval()

Returns the number of gradient evaluations made during the attack.

abstract infer_step(x_init) secml.array.c_array.CArray
abstract invert_feature_mapping(x, x_adv) secml.array.c_array.CArray
abstract loss_function_gradient(original_x: secml.array.c_array.CArray, adv_x: secml.array.c_array.CArray, penalty_term: torch.Tensor)
objective_function(x)

Objective function.

Parameters

x (CArray or CDataset) – the input sample, or the dataset of samples, on which the objective function is evaluated

Returns

f_obj, the value of the objective function computed at x

Return type

float or CArray of floats

objective_function_gradient(x)

Gradient of the objective function.

abstract optimization_solver(E, gradient_f, index_to_consider, x_init) secml.array.c_array.CArray

secml_malware.attack.whitebox.c_extend_dos_evasion module

class secml_malware.attack.whitebox.c_extend_dos_evasion.CExtendDOSEvasion(end2end_model: secml_malware.models.c_classifier_end2end_malware.CClassifierEnd2EndMalware, pe_header_extension: int = 512, iterations: int = 100, is_debug: bool = False, random_init: bool = False, threshold: float = 0.5, penalty_regularizer: float = 0, chunk_hyper_parameter: Optional[int] = None)

Bases: secml_malware.attack.whitebox.c_format_exploit_evasion.CFormatExploitEvasion

DOS header extension attack: enlarges the DOS header of the input PE file, creating space that is then filled with adversarial content.
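
A minimal construction sketch (the pretrained MalConv helpers follow the library's README; the parameter values are illustrative):

    from secml_malware.models.malconv import MalConv
    from secml_malware.models.c_classifier_end2end_malware import CClassifierEnd2EndMalware
    from secml_malware.attack.whitebox.c_extend_dos_evasion import CExtendDOSEvasion

    net = CClassifierEnd2EndMalware(MalConv())
    net.load_pretrained_model()

    # Enlarge the DOS header by 512 bytes and let the attack optimize their content.
    attack = CExtendDOSEvasion(net, pe_header_extension=512, iterations=50)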

secml_malware.attack.whitebox.c_fast_gradient_sign_evasion module

class secml_malware.attack.whitebox.c_fast_gradient_sign_evasion.CFastGradientSignMethodEvasion(end2end_model: secml_malware.models.c_classifier_end2end_malware.CClassifierEnd2EndMalware, indexes_to_perturb: list, epsilon: float, iterations: int = 100, is_debug: bool = False, random_init: bool = False, threshold: float = 0.5, penalty_regularizer: float = 0, p_norm: float = inf, store_checkpoints: Optional[int] = None)

Bases: secml_malware.attack.whitebox.c_end2end_evasion.CEnd2EndMalwareEvasion

Creates the basic attack that implements the Fast Gradient Sign Method for the Windows malware domain. The original attack was proposed by Goodfellow et al. (https://arxiv.org/abs/1412.6572)
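
The core update is the usual FGSM step, applied here in the embedding space; the perturbed embeddings are then mapped back to valid bytes via invert_feature_mapping. An illustrative one-step sketch (not the library's internal code):

    import torch

    def fgsm_step(x_emb: torch.Tensor, grad: torch.Tensor, epsilon: float) -> torch.Tensor:
        # Canonical FGSM update: perturb along the sign of the gradient.
        # For evasion, grad is the gradient of the malware score to be
        # minimized, so the step is subtracted.
        return x_emb - epsilon * grad.sign()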

apply_feature_mapping(x: secml.array.c_array.CArray)
compute_penalty_term(original_x: secml.array.c_array.CArray, adv_x: secml.array.c_array.CArray, par: float)
infer_step(x_init)
invert_feature_mapping(x, x_adv)
loss_function_gradient(original_x: secml.array.c_array.CArray, adv_x: secml.array.c_array.CArray, penalty_term: torch.Tensor)
optimization_solver(E, gradient_f, index_to_consider, x_init)

secml_malware.attack.whitebox.c_format_exploit_evasion module

class secml_malware.attack.whitebox.c_format_exploit_evasion.CFormatExploitEvasion(end2end_model: secml_malware.models.c_classifier_end2end_malware.CClassifierEnd2EndMalware, preferable_extension_amount: int = 512, pe_header_extension: int = 512, iterations: int = 100, is_debug: bool = False, random_init: bool = False, threshold: float = 0.5, penalty_regularizer: float = 0, chunk_hyper_parameter: Optional[int] = None)

Bases: secml_malware.attack.whitebox.c_discretized_bytes_evasion.CDiscreteBytesEvasion

create_real_sample_from_adv(original_file_path: str, x_adv: secml.array.c_array.CArray, new_file_path: Optional[str] = None) bytearray

Creates a real adversarial example

Parameters
  • original_file_path (str) – the original malware sample

  • x_adv (CArray) – the perturbed malware sample, as created by the optimizer

  • new_file_path (str, optional, default None) – the path where to save the adversarial malware. Leave None to not save the result to disk

Returns

the adversarial malware, as a bytearray

Return type

bytearray

secml_malware.attack.whitebox.c_header_evasion module

class secml_malware.attack.whitebox.c_header_evasion.CHeaderEvasion(end2end_model: secml_malware.models.c_classifier_end2end_malware.CClassifierEnd2EndMalware, index_to_perturb: Optional[list] = None, iterations: int = 100, is_debug: bool = False, random_init: bool = False, optimize_all_dos: bool = False, threshold: float = 0, penalty_regularizer: int = 0)

Bases: secml_malware.attack.whitebox.c_discretized_bytes_evasion.CDiscreteBytesEvasion

Creates the attack that perturbs the DOS header of a Windows PE malware.
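
End-to-end usage, adapted from the library's README (the sample folder and file names are hypothetical; load_pretrained_model() is assumed to fetch the pretrained MalConv weights):

    import os

    from secml.array import CArray

    from secml_malware.attack.whitebox.c_header_evasion import CHeaderEvasion
    from secml_malware.models.c_classifier_end2end_malware import CClassifierEnd2EndMalware, End2EndModel
    from secml_malware.models.malconv import MalConv

    net = CClassifierEnd2EndMalware(MalConv())
    net.load_pretrained_model()

    attack = CHeaderEvasion(net, random_init=False, iterations=50,
                            optimize_all_dos=False, threshold=0.5)

    folder = "samples"  # hypothetical folder of PE malware
    for f in os.listdir(folder):
        path = os.path.join(folder, f)
        with open(path, "rb") as handle:
            code = handle.read()
        # Pad / truncate the raw bytes to the network input size.
        x = End2EndModel.bytes_to_numpy(code, net.get_input_max_length(), 256, False)
        _, confidence = net.predict(CArray(x), True)
        conf = confidence[0, 1].item()
        # The malware score is passed as the label, as in the README.
        y_pred, adv_score, adv_ds, f_obj = attack.run(CArray(x), CArray(conf))
        print(f, "score:", conf, "objective:", f_obj)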

secml_malware.attack.whitebox.c_kreuk_evasion module

class secml_malware.attack.whitebox.c_kreuk_evasion.CKreukEvasion(end2end_model: secml_malware.models.c_classifier_end2end_malware.CClassifierEnd2EndMalware, how_many_padding_bytes: int, epsilon: float, iterations: int = 100, is_debug: bool = False, threshold: float = 0.5, p_norm: float = inf, compute_slack: bool = True, store_checkpoints: Optional[int] = None)

Bases: secml_malware.attack.whitebox.c_fast_gradient_sign_evasion.CFastGradientSignMethodEvasion
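
CKreukEvasion implements the FGSM-based padding attack in the style of Kreuk et al.: padding bytes are appended to the sample and optimized along the sign of the gradient. Construction sketch, with net built as in the CHeaderEvasion example above (the epsilon value is illustrative):

    from secml_malware.attack.whitebox.c_kreuk_evasion import CKreukEvasion

    # Append 1024 optimizable padding bytes; epsilon is the FGSM step size.
    attack = CKreukEvasion(net, how_many_padding_bytes=1024, epsilon=0.1, iterations=50)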

secml_malware.attack.whitebox.c_padding_evasion module

class secml_malware.attack.whitebox.c_padding_evasion.CPaddingEvasion(end2end_model: secml_malware.models.c_classifier_end2end_malware.CClassifierEnd2EndMalware, how_many: int, iterations: int = 100, is_debug: bool = False, random_init: bool = False, threshold: float = 0, penalty_regularizer: int = 0)

Bases: secml_malware.attack.whitebox.c_discretized_bytes_evasion.CDiscreteBytesEvasion

Constructs an attack object that appends one byte at a time.
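
Construction sketch, with net built as above (how_many is the number of appended bytes; the value is illustrative):

    from secml_malware.attack.whitebox.c_padding_evasion import CPaddingEvasion

    # Append 1024 bytes at the end of the sample and optimize their values.
    attack = CPaddingEvasion(net, how_many=1024, iterations=50)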

secml_malware.attack.whitebox.c_shift_evasion module

class secml_malware.attack.whitebox.c_shift_evasion.CContentShiftingEvasion(end2end_model: secml_malware.models.c_classifier_end2end_malware.CClassifierEnd2EndMalware, preferable_extension_amount=512, iterations: int = 100, is_debug: bool = False, random_init: bool = False, threshold: float = 0.5, penalty_regularizer: float = 0, chunk_hyper_parameter: Optional[int] = None)

Bases: secml_malware.attack.whitebox.c_format_exploit_evasion.CFormatExploitEvasion

Content shifting attack: shifts the content of the PE file to create space that is then filled with adversarial bytes.
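
Construction sketch, with net built as above (the extension amount is illustrative):

    from secml_malware.attack.whitebox.c_shift_evasion import CContentShiftingEvasion

    # Shift the content by 512 bytes and optimize the freed space.
    attack = CContentShiftingEvasion(net, preferable_extension_amount=512, iterations=50)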

secml_malware.attack.whitebox.c_suciu_evasion module

class secml_malware.attack.whitebox.c_suciu_evasion.CSuciuEvasion(end2end_model: secml_malware.models.c_classifier_end2end_malware.CClassifierEnd2EndMalware, how_many_padding_bytes: int, epsilon: float, is_debug: bool = False, threshold: float = 0.5, compute_slack: bool = True)

Bases: secml_malware.attack.whitebox.c_kreuk_evasion.CKreukEvasion
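
CSuciuEvasion is a one-shot variant of the padding attack, in the style of Suciu et al.; it inherits the FGSM machinery of CKreukEvasion. Construction sketch, with net built as above (parameter values are illustrative):

    from secml_malware.attack.whitebox.c_suciu_evasion import CSuciuEvasion

    # One-shot FGSM on 1024 appended padding bytes.
    attack = CSuciuEvasion(net, how_many_padding_bytes=1024, epsilon=1.0)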

Module contents