It's simpler in PowerShell, where there's no such limit. For files that aren't very big you can use this one-liner:
$ (Get-Content -Raw ./in.txt) -split '(.{10})' -ne '' | Set-Content out.txt
# Or the shortened version
$ (gc -Ra in.txt) -split '(.{10})' -ne '' >out.txt
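To see what the capture-group split actually does, you can try it on a throwaway literal string (the sample text here is made up):

```powershell
# Splitting on '(.{10})' keeps each 10-character match as a capture,
# and -ne '' filters out the empty strings between consecutive matches.
'abcdefghijklmnop' -split '(.{10})' -ne ''
# Outputs:
# abcdefghij
# klmnop
```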
Of course it's better to write the entire script in PowerShell, but if you really can't, you can simply call it from cmd or a batch file like this:
powershell -C "(gc -Ra in.txt) -split '(.{10})' -ne '' >out.txt"
This method reads the whole file into memory and splits it into 10-character strings using the .{10} regex, so it won't work for very large (multi-GB) files. For such huge files you can use this instead:
$ Get-Content -AsByteStream -ReadCount 10 ./in.txt |
    ForEach-Object { [Text.Encoding]::ASCII.GetString($_) } |
    Set-Content out.txt
# Or the shortened version
$ gc -A -Re 10 in.txt |% { [Text.Encoding]::ASCII.GetString($_) } >out.txt
This will read the input file as a byte stream, grab every 10 bytes, and print them as a string, so there's no limit on line length. Note that -AsByteStream requires PowerShell 6+; in Windows PowerShell 5.1 use -Encoding Byte instead.
Remember to select the correct encoding of your files by replacing [Text.Encoding]::ASCII with
[Text.Encoding]::GetEncoding("windows-1252") (the default charset on US Windows),
[Text.Encoding]::GetEncoding("iso-8859-1"), and so on, depending on whether your input files are in CP1252, ISO-8859-1, or another encoding. You can simply check the encoding in Notepad++.
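For example, a CP1252 variant of the byte-stream pipeline above could look like this (same hypothetical in.txt/out.txt file names as before):

```powershell
# Look up the encoding once, then decode each 10-byte chunk with it
$enc = [Text.Encoding]::GetEncoding("windows-1252")
gc -A -Re 10 in.txt |% { $enc.GetString($_) } >out.txt
```

Caching the encoding in a variable avoids repeating the GetEncoding lookup for every chunk.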
For UTF-8 and UTF-16 you'd use [Text.Encoding]::UTF8 and
[Text.Encoding]::Unicode, but splitting at fixed byte offsets won't quite work for those, because a 10-byte chunk can cut a variable-length multibyte character in half. You can use this solution instead:
Get-Content ./in.txt | ForEach-Object {
    $line = $_
    for ($i = 0; $i -lt $line.Length; $i += 10) {
        $line.Substring($i, [Math]::Min(10, $line.Length - $i))
    }
}
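If you'd rather keep this logic in a reusable script, a minimal sketch could look like the following (the file name Split-Lines.ps1 and the parameter names are my own invention, not a standard cmdlet):

```powershell
# Split-Lines.ps1 -- split each line of a text file into fixed-width chunks
param(
    [Parameter(Mandatory)][string]$Path,
    [string]$OutFile = 'out.txt',
    [int]$Width = 10
)

Get-Content $Path | ForEach-Object {
    $line = $_
    for ($i = 0; $i -lt $line.Length; $i += $Width) {
        # Emit one chunk of up to $Width characters
        $line.Substring($i, [Math]::Min($Width, $line.Length - $i))
    }
} | Set-Content $OutFile
```

You could then invoke it from cmd with something like `powershell -File Split-Lines.ps1 -Path in.txt -OutFile out.txt -Width 10`.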
You can call it from cmd as above, or add some options like these to speed up the startup time:
powershell -NoProfile -ExecutionPolicy Bypass -NoLogo -NonInteractive -Command "gc -A -Re 10 in.txt |% { [Text.Encoding]::ASCII.GetString($_) } >out.txt"