UgenTec.Framework.Core.IO QuickStart
UgenTec.Framework.Core.IO is a library package containing abstractions for handling file IO uniformly, whether files are stored in the cloud (Azure Blob Storage) or on local disk.
Installation
Use the internal UgenTec NuGet-2 feed to install the UgenTec.Framework.Core.IO package:
install-package UgenTec.Framework.Core.IO
Usage
This package was designed to provide an abstraction for file IO which can transparently swap between local disk and Azure Blob Storage. As such, the API has a 'cloud first' perspective, making the 'container' a first-class citizen. For local disk, a container translates into a regular directory.
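As a quick illustration (using the OpenReadAsync method shown later in this document; the container and file names are made up), the same call works regardless of the backing store:

// "cassettes" is a blob container in Azure Blob Storage, or the
// subdirectory <basePath>\cassettes when running against local disk.
using (var file = await fileSystem.OpenReadAsync("cassettes", "cassette-1.json"))
{
    // process file.Stream ...
}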
Registering filesystem dependencies
Registering the default file system provider can be done in the global startup of the application by means of a simple dependency injection configuration:
public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped<IFileSystem>(ctx =>
    {
        var connectionString = ""; // logic to obtain the correct connection string needs to be injected here.
                                   // this can be config based, or through communication with the admin service
        return new AzureBlobStorageFileSystem(connectionString, ctx.GetService<ILogger<AzureBlobStorageFileSystem>>());
    });
    // OR
    services.AddScoped<IFileSystem>(ctx =>
    {
        var basePath = @"c:\ugentec\testapp"; // Configure the base folder where all application data is stored here
        return new LocalDiskFileSystem(basePath, ctx.GetService<ILogger<LocalDiskFileSystem>>());
    });
}
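As the comments above indicate, the choice between the two registrations is typically configuration-driven. A minimal sketch of what that could look like, assuming a hypothetical Storage configuration section with Type, ConnectionString and BasePath keys:

public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped<IFileSystem>(ctx =>
    {
        // Hypothetical configuration section; adapt the keys to your application.
        var storage = ctx.GetService<IConfiguration>().GetSection("Storage");
        if (storage["Type"] == "AzureBlobStorage")
        {
            return new AzureBlobStorageFileSystem(
                storage["ConnectionString"],
                ctx.GetService<ILogger<AzureBlobStorageFileSystem>>());
        }
        return new LocalDiskFileSystem(
            storage["BasePath"],
            ctx.GetService<ILogger<LocalDiskFileSystem>>());
    });
}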
We expect file systems to always be wrapped in a domain-oriented repository or store class (e.g. CassetteStore, AssayRepository). For cases where more fine-grained control is needed over the file system used by these repositories or stores, dependency injection needs to be leveraged.
E.g. this LocalDiskCachedCassetteStore always needs local disk storage, even if the default file system is Azure Blob Storage:
public class LocalDiskCachedCassetteStore : ICassetteStore
{
    private readonly ILocalDiskFileSystem _fileSystem;

    public LocalDiskCachedCassetteStore(ILocalDiskFileSystem fileSystem)
    {
        _fileSystem = fileSystem;
    }
}
To be able to configure this, a new marker interface can be introduced (in this case ILocalDiskFileSystem), which extends IFileSystem and can be used in the DI configuration:
public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped<ILocalDiskFileSystem>(ctx =>
    {
        var basePath = @"c:\localcache\"; // Base folder where we expect all usages of the ILocalDiskFileSystem to store data
        return new LocalDiskFileSystem(basePath, ctx.GetService<ILogger<LocalDiskFileSystem>>());
    });
    services.AddScoped<ICassetteStore, LocalDiskCachedCassetteStore>();
}
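Consumers then simply depend on ICassetteStore; the container wires the local disk file system behind it, regardless of what the default IFileSystem registration points to. A hypothetical consumer:

public class CassetteImportService
{
    private readonly ICassetteStore _cassetteStore; // resolved as LocalDiskCachedCassetteStore

    public CassetteImportService(ICassetteStore cassetteStore)
    {
        _cassetteStore = cassetteStore;
    }
}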
Working with streamed file uploads and downloads
Streaming was introduced in version 10.0.0 to reduce application memory usage. It is made possible by the OpenReadAsync and WriteAsync<T> methods.
Atomic uploads are also still possible by passing true for the atomic flag of WriteAsync<T>. When performing atomic uploads, snapshotting is not necessary.
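A sketch of such an atomic upload, reusing the field and variable names from the TestObjectStore example below and assuming the flag is exposed as a named boolean parameter (mirroring the streamed parameter used there):

await _fileSystem.WriteAsync(
    data,
    _serializer,
    StorageContainer,
    fileName,
    WriteBehavior.Insert,
    atomic: true // assumed parameter name; the upload completes as a single operation
);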
Implementation of streaming is up to the stores in each application, and can look like this:
public class TestObjectStore : ITestObjectStore
{
    public const string StorageContainer = "a_container";

    private readonly IFileSystem _fileSystem;
    private readonly ISerializer<TestObject> _serializer;

    public TestObjectStore(TenantSpecificFileSystemFactory fileSystemFactory)
    {
        _fileSystem = fileSystemFactory(StorageType.Premium);
        _serializer = new JsonSerializer<TestObject>();
    }

    public async Task InsertAsync(TestObject data)
    {
        var fileName = GetFileName(data.Guid);
        await _fileSystem.WriteAsync(
            data,
            _serializer,
            StorageContainer,
            fileName,
            WriteBehavior.Insert,
            streamed: true
        );
    }

    public async Task UpdateAsync(TestObject data)
    {
        var fileName = GetFileName(data.Guid);
        // Get the concurrency marker of the existing file so the update fails on concurrent modification
        var originalConcurrencyMarker = (await _fileSystem.GetMetadata(StorageContainer, fileName)).ConcurrencyMarker;
        await _fileSystem.WriteAsync(
            data,
            _serializer,
            StorageContainer,
            fileName,
            WriteBehavior.Update(originalConcurrencyMarker),
            streamed: true
        );
    }

    public async Task<TestObject> ReadAsync(Guid testObjectGuid)
    {
        var fileName = GetFileName(testObjectGuid);
        using (var fileStreamWithMetadata = await _fileSystem.OpenReadAsync(StorageContainer, fileName))
        {
            return _serializer.Deserialize(fileStreamWithMetadata.Stream);
        }
    }

    // Hypothetical naming scheme; adapt to your application's conventions.
    private static string GetFileName(Guid guid) => $"{guid}.json";
}
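A hypothetical round trip through this store (assuming TestObject exposes a settable Guid property and a tenant-specific factory is available):

var store = new TestObjectStore(fileSystemFactory);

var testObject = new TestObject { Guid = Guid.NewGuid() };
await store.InsertAsync(testObject);           // streamed insert

var readBack = await store.ReadAsync(testObject.Guid);
await store.UpdateAsync(readBack);             // streamed, concurrency-checked update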
❗ IMPORTANT ❗ Streamed uploads have an impact on Event Subscriptions in Azure Storage Accounts. When performing a streamed upload, 2 BlobCreated events are triggered: one where data.api is "PutBlob" but with a data.contentLength of 0 (see the example below), and a second one where data.api is "PutBlockList" with the correct data.contentLength.
Atomic uploads still trigger only 1 BlobCreated event, where data.api is "PutBlob" with the correct data.contentLength.
This means we should introduce filtering on our Event Subscriptions to ignore messages where data.api equals "PutBlob" and data.contentLength equals 0.
{
  "topic": "<redacted>",
  "subject": "<redacted>",
  "eventType": "Microsoft.Storage.BlobCreated",
  "id": "<redacted>",
  "data": {
    "api": "PutBlob",
    "clientRequestId": "<redacted>",
    "requestId": "<redacted>",
    "eTag": "0x8D9D9C0CC6D7FAD",
    "contentType": "application/octet-stream",
    "contentLength": 0,
    "blobType": "BlockBlob",
    "url": "<redacted>",
    "sequencer": "<redacted>",
    "storageDiagnostics": {
      "batchId": "<redacted>"
    }
  },
  "dataVersion": "<redacted>",
  "metadataVersion": "1",
  "eventTime": "2022-01-17T13:53:53.6322221Z"
}
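Where the Event Subscription itself cannot express this filter, it can also be applied in the consuming code. A minimal sketch using System.Text.Json to detect the zero-length "PutBlob" marker event (the surrounding event handler is hypothetical):

using System.Text.Json;

// Returns true when the event is the zero-length "PutBlob" event that precedes
// the "PutBlockList" event of a streamed upload, and should therefore be ignored.
static bool IsEmptyPutBlobEvent(string eventJson)
{
    using var doc = JsonDocument.Parse(eventJson);
    var data = doc.RootElement.GetProperty("data");
    return data.GetProperty("api").GetString() == "PutBlob"
        && data.GetProperty("contentLength").GetInt64() == 0;
}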