High-Performance CSV Generation Library for .NET
Enterprise dashboards and SaaS applications frequently require robust data extraction capabilities. While traditional spreadsheet formats remain popular, comma-separated values (CSV) output is often preferred when the primary objective is lightweight data transport, rapid parsing, or strict schema independence. The following outlines a .NET-based implementation designed to streamline CSV creation while addressing common formatting and performance constraints.
Core capabilities include:
- Sequential appending for multi-stage batch operations.
- Dynamic header assignment and remapping.
- Automatic string coercion for numeric identifiers exceeding 15 digits, preventing spreadsheet applications from rounding long keys to 15 significant digits and zero-filling the remainder (see the sketch below).
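The rationale for that last safeguard is Excel's 15-significant-digit precision limit: any longer number has its trailing digits silently replaced with zeros when the file is opened. A minimal sketch of the detection logic, using a hypothetical helper rather than the engine's internal API:
// Illustrative only; the engine's actual coercion rules may differ
// (some exporters emit the ="..." form to force text interpretation).
static string CoerceIdentifier(long value)
{
    string text = value.ToString(System.Globalization.CultureInfo.InvariantCulture);
    // Past 15 digits, emit the value as quoted text so spreadsheet
    // tools do not round it to 15 significant digits.
    return text.Length > 15 ? $"\"{text}\"" : text;
}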
Initialization & Dependency Injection
Add the package to your project and register the export engine within your service collection. This enables constructor injection across your application layers.
public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<ICsvExportEngine, CsvExportEngine>(); // CsvExportEngine assumed as the package's default implementation
}
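On the .NET 6+ minimal hosting model, the equivalent registration hangs off the WebApplication builder (again assuming CsvExportEngine as the implementation type):
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSingleton<ICsvExportEngine, CsvExportEngine>();
Controllers and services can then request the engine through their constructors: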
private readonly ICsvExportEngine _exportEngine;
public ReportController(ICsvExportEngine engine)
{
_exportEngine = engine;
}
Column Mapping Strategies
1. Explicit Dictionary Mapping for Anonymous Types
When exporting ad-hoc data without predefined models, pair a collection of dynamic objects with a header-to-property dictionary. This approach decouples the output structure from the source data schema.
var sourceRecords = new List<dynamic>
{
new { UserAlias = "Sully_A", TxnId = 8473920184739201847 },
new { UserAlias = "Ben_B", TxnId = 9928374650192837465 }
};
var schemaMap = new Dictionary<string, string>
{
{ "OperatorID", "UserAlias" },
{ "ReferenceKey", "TxnId" }
};
await _exportEngine.ExportAsync(sourceRecords, schemaMap, @"C:\exports\ledger.csv");
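For the two records above, ledger.csv would look roughly as follows; the headers come from the dictionary keys, and the 19-digit TxnId values fall under the engine's long-number coercion (the exact quoting applied is the engine's choice):
OperatorID,ReferenceKey
Sully_A,"8473920184739201847"
Ben_B,"9928374650192837465"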
2. Fluent Builder with Value Transformation
For strongly typed entities, a fluent API simplifies column definition and property binding. The engine also exposes a transformation hook, enabling runtime formatting of specific fields before serialization.
var dataSet = FetchRecords();
var columnDefinition = _exportEngine
.Bind<TransactionRecord>("OperatorID", r => r.Id)
.Bind<TransactionRecord>("FullName", r => r.DisplayName)
.Bind<TransactionRecord>("IsActive", r => r.Enabled)
.Bind<TransactionRecord>("ProcessedDate", r => r.Timestamp)
.CompileMapping();
_exportEngine.PreWriteFormatter = (headerName, sourceField, incomingValue) =>
{
if (incomingValue == null) return string.Empty;
if (headerName == "ProcessedDate" && incomingValue is DateTime date)
{
return date.ToString("dd-MMM-yyyy");
}
return incomingValue.ToString();
};
await _exportEngine.ExportAsync(dataSet, columnDefinition, @"C:\exports\ledger_v2.csv");
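Assuming FetchRecords yields a record with Id 401, DisplayName "Alice", Enabled set to true, and a timestamp of 5 March 2024, the output would resemble:
OperatorID,FullName,IsActive,ProcessedDate
401,Alice,True,05-Mar-2024
Every field other than ProcessedDate falls through to the default ToString call in the formatter.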
3. Declarative Annotation Mapping
Projects favoring configuration-as-code can apply custom attributes directly to model properties. The exporter scans these annotations and constructs the column layout automatically, with no manual dictionary assembly.
public class TransactionRecord
{
[CsvColumn("OperatorID")]
public int Id { get; set; }
[CsvColumn("FullName")]
public string DisplayName { get; set; }
[CsvColumn("ProcessedDate")]
public DateTime Timestamp { get; set; }
}
// Execution
var typedCollection = new List<TransactionRecord>
{
new TransactionRecord { Id = 401, DisplayName = "Alice", Timestamp = DateTime.UtcNow }
};
await _exportEngine.ExportWithAnnotationsAsync(typedCollection, @"C:\exports\ledger_v3.csv");
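The snippet assumes the package supplies the CsvColumn attribute. For readers wiring up a comparable scheme by hand, a minimal attribute needs little more than a header name (illustrative sketch, not the package's definition):
// Hypothetical stand-in for the package's attribute.
[AttributeUsage(AttributeTargets.Property)]
public sealed class CsvColumnAttribute : Attribute
{
    public string Header { get; }
    public CsvColumnAttribute(string header) => Header = header;
}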
Note: if the supplied file path is invalid or the dataset is empty, the export call returns an empty byte array and no file is created on disk.
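Given that silent-failure contract, callers that need to detect a no-op can inspect the returned buffer. A short guard, assuming the export methods return the serialized bytes as the note describes (the _logger field is illustrative):
var payload = await _exportEngine.ExportWithAnnotationsAsync(typedCollection, @"C:\exports\ledger_v3.csv");
if (payload.Length == 0)
{
    // Empty buffer: invalid path or empty dataset; nothing was written.
    _logger.LogWarning("CSV export produced no output for ledger_v3.csv.");
}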
Memory-Efficient Disk Streaming
Loading hundreds of thousands of rows into memory before serialization triggers excessive garbage collection and risks out-of-memory failures. To handle enterprise-scale datasets, the exporter provides a streaming method that pairs with a database reader. This architecture processes rows sequentially and flushes them directly to disk.
[HttpGet("bulk-export")]
public async Task<IActionResult> GenerateBulkReport()
{
var targetPath = Path.Combine("Exports", "historical_data.csv");
Directory.CreateDirectory(Path.GetDirectoryName(targetPath));
var connString = await File.ReadAllTextAsync("config/db_connection.txt");
var queryText = await File.ReadAllTextAsync("config/query.sql");
using var connection = new MySqlConnection(connString);
await connection.OpenAsync();
using var command = new MySqlCommand(queryText, connection);
using var reader = await command.ExecuteReaderAsync();
Func<IDataReader, TransactionRecord> mapper = DataParser.GetMapper<TransactionRecord>();
await _exportEngine.StreamToFileAsync(targetPath, reader, rdr => mapper(rdr));
return Ok("Stream write completed.");
}
This sequential I/O pattern maintains a constant memory footprint regardless of total row volume, making it ideal for scheduled reporting pipelines and large-scale data migrations.
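One loose end above is DataParser.GetMapper, which is used without a definition. A minimal reflection-based sketch of such a helper follows; a production version would likely cache column ordinals or compile expression trees rather than reflecting on every row:
using System;
using System.Data;
using System.Reflection;

public static class DataParser
{
    // Hypothetical sketch: copies reader columns onto same-named
    // public properties of T, skipping absent or DBNull columns.
    public static Func<IDataReader, T> GetMapper<T>() where T : new()
    {
        PropertyInfo[] props = typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance);
        return reader =>
        {
            var instance = new T();
            foreach (var prop in props)
            {
                if (!prop.CanWrite) continue;
                int ordinal;
                try { ordinal = reader.GetOrdinal(prop.Name); }
                catch (IndexOutOfRangeException) { continue; } // no matching column
                if (reader.IsDBNull(ordinal)) continue;
                prop.SetValue(instance, Convert.ChangeType(reader.GetValue(ordinal), prop.PropertyType));
            }
            return instance;
        };
    }
}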