Upgrading Debian wheezy to jessie
I recently upgraded my Debian instances from wheezy to jessie. Here's what happened...
Silver
No problems. Boot and shutdown are much faster with systemd than with the old SysV init system.
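For the curious, systemd ships a tool that shows where boot time goes; it is available on jessie (commands for illustration only, I did not record the figures):

systemd-analyze          # total time spent in kernel and userspace at boot
systemd-analyze blame    # per-unit breakdown, slowest first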
Amber
Everything appeared to go well, until I actually rebooted after the dist-upgrade step. No login prompt was issued on the console. I could type, and characters were echoed, but that was it. However, I could switch to other virtual ttys, so jessie had come up. It appeared that systemd was launching lightdm (which I do not remember installing). I removed lightdm, which got me tty1 back, but I could not start X ("No screens available").
Figuring there might be something wrong in the interaction between the radeon driver and the graphics card, I tried installing fglrx to replace it. That made no difference. It wasn't until I removed fglrx and all its works, and also removed /etc/X11/xorg.conf, that X started working as before. I am not totally sure what happened here, but I am glad it is working now.
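For reference, the recovery amounted to something like this (a sketch reconstructed from the steps above; the fglrx package names are my assumption of what jessie would have installed):

apt-get remove --purge lightdm                          # stop systemd launching a display manager
apt-get remove --purge fglrx-driver fglrx-modules-dkms  # undo the fglrx experiment
rm /etc/X11/xorg.conf                                   # fall back to X auto-detection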
Gold
On gold, the install also appeared to go well, but on booting into the jessie kernel I started seeing masses of I/O errors on the second drive, a Western Digital 120GB device. This is the second time the install of an operating system has seemed to destroy hardware!
May 20 08:10:49 gold kernel: sd 2:0:0:0: [sdb] Unhandled sense code
May 20 08:10:49 gold kernel: sd 2:0:0:0: [sdb]
May 20 08:10:49 gold kernel: Result: hostbyte=DID_ERROR driverbyte=DRIVER_SENSE
May 20 08:10:49 gold kernel: sd 2:0:0:0: [sdb]
May 20 08:10:49 gold kernel: Sense Key : Hardware Error [current]
May 20 08:10:49 gold kernel: sd 2:0:0:0: [sdb]
May 20 08:10:49 gold kernel: Add. Sense: No additional sense information
May 20 08:10:49 gold kernel: sd 2:0:0:0: [sdb] CDB:
May 20 08:10:49 gold kernel: Read(10): 28 00 03 c0 07 80 00 00 08 00
May 20 08:10:50 gold kernel: sd 2:0:0:0: [sdb] Unhandled sense code
May 20 08:10:50 gold kernel: sd 2:0:0:0: [sdb]
May 20 08:10:50 gold kernel: Result: hostbyte=DID_ERROR driverbyte=DRIVER_SENSE
May 20 08:10:50 gold kernel: sd 2:0:0:0: [sdb]
May 20 08:10:50 gold kernel: Sense Key : Hardware Error [current]
May 20 08:10:50 gold kernel: sd 2:0:0:0: [sdb]
May 20 08:10:50 gold kernel: Add. Sense: No additional sense information
May 20 08:10:50 gold kernel: sd 2:0:0:0: [sdb] CDB:
May 20 08:10:50 gold kernel: Read(10): 28 00 03 c0 07 80 00 00 08 00
May 20 08:10:50 gold kernel: sd 2:0:0:0: [sdb] Unhandled sense code
Replacing the hard drive with a spare silenced the errors, so it was nothing to do with the controller or cabling. I have a kit which adds a USB interface to an IDE drive, so I used it to mount the WD drive under Windows and ran the Western Digital Data LifeGuard Diagnostics. The quick test passed fine, but the extended test aborted with "Too many errors." Clearly the drive had died. It is now in the great disk graveyard.
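Had I wanted to stay within Linux, smartmontools would have been an alternative way to check the drive (a sketch, assuming the package is installed and the drive is still /dev/sdb):

smartctl -H /dev/sdb           # the drive's own overall health verdict
smartctl -t long /dev/sdb      # kick off the drive's built-in extended self-test
smartctl -l selftest /dev/sdb  # read the self-test results once it finishes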
VMware
The upgrade to jessie from wheezy was painless. The major problem was getting the VMware tools working on a Linux 3.16-based system. When attempting to install the VMware tools, using vmware-install.pl, the vmhgfs module (responsible for mounting the host's disks on the VM) would not compile. Googling led me to this post. Liayn had raised the issue and provided diffs for the fixes. I also found I had to patch inode.c. In order to apply the patches, you need to un-tar the vmhgfs.tar file in vmware-tools-distrib/lib/modules/source/. Once the patches are applied, replace the original vmhgfs.tar file with the tarred contents of the vmhgfs-only directory. You should then be able to run the vmware-install.pl script without the compilation error.
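In shell terms, the procedure is roughly this (a sketch: the patch file name is hypothetical, assuming the diffs below have been saved into it):

cd vmware-tools-distrib/lib/modules/source/
tar xf vmhgfs.tar                      # unpacks into vmhgfs-only/
patch -p0 < ~/vmhgfs-3.16-fixes.diff   # apply the diffs listed below
mv vmhgfs.tar vmhgfs.tar.orig          # keep the original, just in case
tar cf vmhgfs.tar vmhgfs-only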
The diffs are as follows:
diff vmhgfs-only/file.c vmhgfs-only-new/file.c
153c153
< #if defined VMW_USE_AIO
---
> #ifdef VMW_USE_AIO
158c158,162
< #else
---
> #else /* !VMW_USE_AIO */
> #if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 16, 0)
>    .read_iter = generic_file_read_iter,
>    .write_iter = generic_file_write_iter,
> #endif
161c165
< #endif
---
> #endif /* !VMW_USE_AIO */
887d890
<
929c932,934
<
---
> #if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 16, 0)
>    result = new_sync_read(file, buf, count, offset);
> #else
930a936
> #endif
980c986,988
<
---
> #if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 16, 0)
>    result = new_sync_write(file, buf, count, offset);
> #else
981a990
> #endif
diff vmhgfs-only/link.c vmhgfs-only-new/link.c
185a186
> #if LINUX_VERSION_CODE <= KERNEL_VERSION(3, 14, 99)
186a188,190
> #else
>    error = readlink_copy(buffer, buflen, fileName);
> #endif
diff vmhgfs-only/shared/compat_fs.h vmhgfs-only-new/shared/compat_fs.h
92c92,93
< #if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 19)
---
> #if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 19) && \
>     LINUX_VERSION_CODE < KERNEL_VERSION(3, 16, 0)
diff vmhgfs-only/inode.c vmhgfs-only-new/inode.c
1920c1920
<                            d_alias) {
---
>                            d_u.d_alias) {
1973c1973
<          struct dentry *dentry = list_entry(pos, struct dentry, d_alias);
---
>          struct dentry *dentry = list_entry(pos, struct dentry, d_u.d_alias);
The VMware tools install process also warns you that vmxnet is no longer supported and that vmxnet3 has to be used instead. I didn't know I was using the vmxnet driver. In fact, I wasn't: VMware was presenting a PCNET32 ethernet adapter to Debian. However, the VMware tools init.d script was trying to start vmxnet and failing, which caused systemd to report a degraded system.
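The degraded state is easy to confirm from a shell (standard systemd commands, present on jessie):

systemctl status    # the overall state line reads "running" or "degraded"
systemctl --failed  # lists the failed unit(s), here the vmware init script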
The network adapter VMware should use is specified in the .vmx file, with a definition of the form:
ethernet0.virtualDev = "vmxnet3"
This didn't work; no ethernet adapter was detected by Linux. I eventually figured out that the hardware version of the VM was too old (4, rather than the 10 of a newly created VM). I created a new VM, using the option to add the operating system later, deleted the original hard disk, copied the .vmdk file from the old VM to the new VM location, and added the copied disk as the new VM disk. VMware asked if I wanted to upgrade the disk, so I said yes. Now the setting of the ethernet adapter worked and vmxnet3 was being used.
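For reference, the relevant .vmx entries ended up looking something like this (a sketch; only the virtualDev line is quoted above, the other two are standard .vmx keys I would expect alongside it):

virtualHW.version = "10"
ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"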