Version: 0.16.8

SparkGoogleCloudStorageDatasource
- class great_expectations.datasource.fluent.SparkGoogleCloudStorageDatasource(*, type: Literal['spark_gcs'] = 'spark_gcs', name: str, id: Optional[uuid.UUID] = None, assets: List[great_expectations.datasource.fluent.file_path_data_asset._FilePathDataAsset] = [], bucket_or_name: str, gcs_options: Dict[str, Union[great_expectations.datasource.fluent.config_str.ConfigStr, Any]] = {})
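A minimal sketch of registering this datasource through the fluent `context.sources` factory. The datasource name, bucket, and credentials path are placeholder values, and the `gcs_options` key shown (`filename`) is assumed to be forwarded to the google-cloud-storage client's credential loading; adjust to your environment.

```python
import great_expectations as gx

# Obtain a Data Context for the current project.
context = gx.get_context()

# Register a Spark GCS datasource; the name, bucket, and credentials
# path are illustrative placeholders, not values from this reference.
datasource = context.sources.add_spark_gcs(
    name="my_spark_gcs",
    bucket_or_name="my-gcs-bucket",
    gcs_options={"filename": "/path/to/service_account_key.json"},
)
```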
- add_csv_asset(name: str, *, id: Optional[uuid.UUID] = None, order_by: List[great_expectations.datasource.fluent.interfaces.Sorter] = None, batch_metadata: Dict[str, Any] = None, batching_regex: Pattern = re.compile('.*'), connect_options: Mapping = None, header: bool = False, InferSchema: bool = False) → pydantic.BaseModel
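A sketch of adding a CSV asset to the datasource created above, using the parameters from this signature. The asset name and the `batching_regex` (whose named groups define batch identifiers) are hypothetical; `header` and `InferSchema` are the Spark CSV reader options as named in the signature.

```python
import re

# Split matching files into monthly batches via named regex groups;
# "taxi_csv" and the path pattern are illustrative placeholders.
csv_asset = datasource.add_csv_asset(
    name="taxi_csv",
    batching_regex=re.compile(r"data/taxi_(?P<year>\d{4})-(?P<month>\d{2})\.csv"),
    header=True,
    InferSchema=True,
)

# Build a batch request to confirm the asset resolves files in the bucket.
batch_request = csv_asset.build_batch_request()
```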