AT command notes

March 15th, 2018

So it is that time of the decade again where you need to poke at some AT modem commands, e.g. for 3G / LTE networks, …

Does the SIM card need a PIN?
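The standard 3GPP TS 27.007 query; a READY response means no PIN is needed:

AT+CPIN?
+CPIN: SIM PIN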

Enter the PIN if required:
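Again the standard command, with "0000" being a placeholder for the actual PIN:

AT+CPIN="0000"
OK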

Current network:
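The standard operator query; the exact response format varies a bit per modem:

AT+COPS?
+COPS: 0,0,"operator",7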

Update: special service numbers, e.g. balance:
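Balance queries are usually USSD codes; *100# here is only a common example, the actual code depends on the carrier:

AT+CUSD=1,"*100#",15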

List of early computing systems [WIP]

March 12th, 2018

I think we initially had a 286 without a hard disk or color graphics
my father’s 386sx25 w/ 2MB RAM for what felt like forever
a gifted, free NEC V20 XT clone thing
Pentium 120
IDT WinChip2 240
AMD K7 Athlon 600
AMD K7 Athlon 1GHz?
Sun Ultra 5
iBook G3 750?

GCC becomes slower and slower

January 18th, 2018

As visible in my other posts, and also on twitter and instagram, I’m working on some vintage machines with our #t2sde these weeks. Not only did the new GCC versions feel slower and slower, where even EPYC datacenter servers took like twice as long to bootstrap some $sysroot, … I also did a quick mips64 build and install on the R10000 mips64 SGI Octane. A hello-world.c compile is like 20% slower going from 4.9.4 to 7.2.0 (N32 user-land):

# gcc --version
gcc (GCC) 4.9.4
# time gcc hello.c
user 0m1.080s


# gcc --version
gcc (GCC) 7.2.0
# time gcc hello.c
user 0m1.290s
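For reference, hello.c is just the usual minimal test program (reconstructed here, not the exact file used back then):

# cat hello.c
#include <stdio.h>

int main()
{
    printf("Hello, world!\n");
    return 0;
}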

glibc minimum linux kernel version

December 31st, 2017

Note to self:

glibc-2.13: at least 2.6.12… ok (mips64)
glibc-2.19: minimum kernel version reset to 2.6.16 (mips64)
glibc-2.21: at least 2.6.32 (mips64)

to be extended.
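For reference, this minimum is chosen when glibc itself is built, via its configure flag (the version value here is just an example):

./configure --enable-kernel=2.6.32

Running a binary against such a glibc on an older kernel then fails early with "FATAL: kernel too old".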

Also, turns out the FP NaN representation was recently changed to the IEEE 754-2008 one on MIPS, around Linux kernel version 4.5.0 and glibc 2.23.

Update: On a similar note: GCC 4.4 now supports the MIPS R10K, R12K, R14K and R16K processors.

Update 2: i386 removed with Linux kernel 3.8, last glibc without NPTL for i386 LinuxThreads: 2.3.6?

Update 3: sparc32 sun4c removed with Linux kernel 3.5.

low-level format a spinning hard drive

December 27th, 2017

On this vintage Unix workstation machine I still got one of those spinning SCSI drives. The one in the SPARCstation 2 (spinning at 7200 rpm, from 1999!) had some bad blocks at the end. First I partitioned it so that the OS would not touch them, but as I wanted to re-install a new, slightly different T2 build, I wanted to try to get rid of these bad blocks. From the spec it sounds like those old drives may only re-map reserve spare blocks on low-level format, as opposed to on any write like modern disk drives do: “Flawed sector reallocation at format time”. However, the document also mentions “Programmable auto write and read reallocation”, “Reallocation of defects on command (Post format)” and even “Full automatic read and write reallocation”, hm, … confusing.

Anyways, I did not really want to do a longer-term install with these bad blocks, so I tried sg3_utils’ sg_format for the first time ever. It is a little bit of a scary thing, and you should certainly not do this lightheartedly. After issuing the SCSI FORMAT command, the drive is busy and won’t respond to regular SCSI commands. It ran for an hour, so I guess it was stuck at some bad area. I turned it off, guessing this might render it bricked, and it came back online, but without responding to SCSI READs and WRITEs, … I issued another FORMAT in the hope it might complete, and after only some minutes it did, ..! Yay, what luck.
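For reference, the invocation was roughly like this (sg3_utils; the device name is only an example, triple-check you format the right disk!):

sg_format --format /dev/sg1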

So do not try this too lightly and too often. I still have to re-read the drive to see if it still gives read errors, or if the reallocation re-mapping to reserve sectors was successful.
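Such a full-surface re-read can be as simple as (device name again just an example):

dd if=/dev/sdb of=/dev/null bs=1M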

CPU support lost in the Linux kernel

December 23rd, 2017

For those enjoying tinkering with vintage, retro computer gear: Linux was the kernel and OS supposedly supporting every CPU, smart toaster and coffee machine under the sky.
Unfortunately, with all the massively parallel, state-of-the-art performance tinkering, some vintage maintenance burden was dropped over the years. Case in point: support for the original Intel 80386, which lacks the CMPXCHG instruction introduced with the i486, was removed in Linux kernel 3.8. And also the early Sun SPARC v7 (Cypress), which even lacked hardware multiply and divide, somewhere around kernel release 3.4, … :-/

Update: It also becomes increasingly difficult to configure kernels below the ~2.6MB size limit required for booting on ancient sparc32 Sun machines, … :-(

Update 2: those apparently also required “special” SCSI CD-ROM drives supporting 512-byte sector reads, as opposed to the 2048-byte sectors used by standard PC drives, ..?

Resetting Sun idprom nvram

December 21st, 2017

Note to self, before it disappears from the interwebs: when your Sun idprom nvram battery dies.

The following was tested on a Sun SPARCstation 2 (sun4c) and Ultra 5 (sun4u):

Hit `n’ to get the new openboot prompt (it probably tries network booting):

f idprom@ 1 xor f mkp # this will invalidate the checksum
8 0 20 13 de ad c0ffee mkpl # MAC 08:00:20:13:de:ad, host ID c0ffee

Hit Ctrl-D, then Ctrl-R. If you do NOT see a Sun copyright notice it worked, otherwise it failed.
You can check the result with OpenBoot’s banner command:
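banner # prints the Ethernet address and host ID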


You probably want to set some sane defaults, and disable the diag mode, to skip the excessively long memory test each time you boot:

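set-defaults # first reset all variables to factory defaults (standard OpenBoot command)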
setenv diag-switch? false


Update: A new clock chip also needs to be started. From what I read the old SunOS might have code in the kernel clock driver to do that, but AFAICS the Linux kernel does not. This (untested) OpenFirmware code sequence supposedly starts a new clock chip on sun4c:

2000000 obio 0 map-page # map NVRAM to page 0
80 7f8 c! # set write bit
0 7f9 c! # reset stop bit
80 7fb c! # set kick start
0 7f8 c! # reset write bit

# wait for two seconds
80 7f8 c! # set write bit
0 7fb c! # reset kick start

0 7f9 c! # set dummy time and date (if necessary): seconds
0 7fa c! # minutes
0 7fb c! # hours
4 7fc c! # day of week
11 7fd c! # day of month
1 7fe c! # month
96 7ff c! # year (96 -> 1996)

0 7f8 c! # reset write bit

remapping bad spinning disk storage blocks

December 18th, 2017

Your good, old-fashioned rotating hard disk storage starts to develop bad sectors?

Dec 17 10:49:47 server kernel: end_request: I/O error, dev sdb, sector 300037184

One of the easiest, quick-and-dirty ways to remap them on Linux (e.g. easier than fumbling with dd if= of=)?

Double check:

hdparm --read-sector 300037184 /dev/sdb

And if it is indeed the bad block and it still fails:

hdparm --write-sector 300037184 /dev/sdb --yes-i-know-…

Obviously this zeros the sector: all 512 or 4096 bytes that lived at that place are gone forever, giving way to fresh zeros from a spare, remapped reserve block.

Use only when you know what you are doing, your mileage may vary.
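To verify the drive actually remapped the sector, smartmontools’ attribute dump is handy; watch Reallocated_Sector_Ct before and after:

smartctl -A /dev/sdb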

Update: If you initialize a fresh Linux MD RAID, you may want to increase the minimum speed limit to get things going into production a bit faster:

echo 100000 > /proc/sys/dev/raid/speed_limit_min

Update 2: If you are running some error-correcting RAID mode (i.e. not striped RAID 0 ;-) the Linux code will apparently re-write failed sectors and thus already automatically trigger a remap of those sectors:

end_request: I/O error, dev sdb, sector 301373665
ata2: EH complete
raid1:md0: read error corrected (8 sectors at 301373600 on sdb1)
raid1: sdb1: redirecting sector 301373600 to another mirror

Recompress Update 17.11

November 27th, 2017

After releasing our initial PDF Re/compress we received praise from first customers and users - and one popular question: Can you actually reduce the quality much, much more?!

Our initial Re/compress goes through all the PDF’s objects and re-writes them in a much more compact and compressed way, potentially also recovering and fixing some broken files. It also allows reducing the compression quality and down-sampling the images’ resolution.

However, some interested inquirers intentionally wanted way worse, smaller, and thus faster-to-load files. One of the most popular reasons? AutoCAD drawings! Those users usually use some print-to-PDF driver, which typically results in tens (if not hundreds) of thousands of vector segments, but also potentially many small, few-pixel-sized (inline, sigh) image dots from 3D renderings and such. Those would usually not compress very much in our original version. These files also actually cause popular PDF viewers, like Apple’s Preview and naturally even more so Adobe’s Acrobat, to hang for seconds while drawing all this page content - panning and zooming was not a very snappy affair either.

Meet Re/compress 17.11 - our first major update: A newly developed “Rasterize pages to bitmap graphics” pass converts these huge amounts of objects into just a single, highly compressed image. Using the down-sample resolution option you can create a new compressed file, intentionally with “photocopier”-like reduced quality. Particularly useful when you want to mail documents to public tenders and potential clients without exposing all the fine, zoomable details of the original vector file!

We hope Re/compress and all its features can help you in your daily office workflow - and if you have any other wish or inquiry, just let us know!

Re/compress PDF.

Default Apple’s macOS Preview to 100% scale

November 8th, 2017

You sometimes need to print documents, invoices, boarding passes, whatever? Using Apple’s macOS and tired of having to choose “Scale: 100%” to get an accurate printout, instead of the often arbitrary default of “Scale to Fit: 97%” or 98% (likely due to content on margins outside of your printer’s printable page area)?

defaults write com.apple.Preview PVImagePrintingScaleMode 0
defaults write com.apple.Preview PVImagePrintingAutoRotate 0
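To revert to the stock behavior later, the keys can simply be deleted again:

defaults delete com.apple.Preview PVImagePrintingScaleMode
defaults delete com.apple.Preview PVImagePrintingAutoRotate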

Yep. The famous Apple usability and attention to detail ;-)