I consider this a deeply thought-out book that anticipated and spelled out the trivial criticisms that Sam has, for some reason, since responded to anyway. I wouldn’t bother waffling with Blackford et al.; I would instead answer the criticism that the moral problems are not solvable in principle, a claim I don’t think holds. Anyone can have a go at it now, with fabricated data; some worked examples would settle the matter.
However, if you accept his presuppositions, which I think one should (whether one ‘objectively’ should or not), he missed the solution in the final chapter. The monist neuroscientist knows we live in a virtual reality. If you don’t want to live in a virtual reality, then I don’t know what you mean, and neither do you. The solution is to have x virtual realities, where x is the number of conscious creatures.
This way, every conscious creature can achieve maximal well-being; naturally, the futile ‘suffering’ would be done by zombie minds, and so on. I don’t think this needs a push, as it will come about by exaptation anyway. But the whole book seemed to be building up to this correct solution, and it ended before the logic had been followed all the way through.