[rfc-i] Proposed way forwards on backward compatibility with v2
dhc at dcrocker.net
Tue Feb 18 07:33:48 PST 2014
On 2/18/2014 7:04 AM, Ted Lemon wrote:
> On Feb 18, 2014, at 9:16 AM, Riccardo Bernardini
> <framefritti at gmail.com> wrote:
>> To me "Occasional hand-hacking" meant "apply corrections by hand
>> in those cases when the formatting tools do not do a good job."
> I don't want to belabor the point, but where does it end? Widow and
> orphan support? Font sizes?
The question "where does it end" actually entirely misses the point.
We have a working platform. It's worked for a long time. We have a
community trained to use it.
The question should not be "where do we stop in what we remove?" but
"why is it essential to remove anything?".
The pain of learning /additional/ capabilities is fundamentally
different (and fundamentally less) than the pain of changing existing
behaviors (and applying those changes to a roughly 40-year base of
users and software).
To justify breaking the installed base, there should be a clear and
compelling explication of what is broken, why it is considered broken,
and why it is essential to remove it.
This is merely the normal burden imposed when considering change to an
existing, operational system. Any operational system, anywhere that
folks worry about reliable service.
> The thing that stimulated this discussion was vspace, which some
> people seem to think is important, and some seem to want to get rid
> of. My personal opinion about vspace is that it's a mistake,
> because its purpose is to work around brokenness in the rendering
> code. The right thing to do is fix the brokenness.
This interpretation of 'broken' is quite common in a laboratory
engineering discussion, of course. And in terms of theoretical models,
it's quite correct: Bits like vspace violate a clean model of
specification abstraction by introducing grungy presentation formatting
into the markup.
But that use of the term 'broken' ignores operational realities. When
there is a running system/service, the term is usually applied to
operational failures, not theoretical warts.
The theory-based use relies on a clean-sheet approach to discussing
work, rather than an approach that pays attention to a 40-year
operational history and a diverse, installed base of users and software.
It also presumes perfect control over both.
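For concreteness, here is a sketch of the construct under discussion, as it appears in the xml2rfc v2 vocabulary (the surrounding text and the blankLines value are illustrative, not from any particular draft):

```xml
<!-- xml2rfc v2: vspace is a presentation directive embedded
     directly in the text of a paragraph -->
<t>
  Some introductory text.
  <vspace blankLines="3"/>
  Text the author wanted pushed further down the rendered page.
</t>
```

It is exactly this mixing of presentation control into otherwise semantic markup that the "clean model" argument objects to, and that long-time users have found practically useful.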
> E.g., if you have a figure that you are trying to align at the top of
For any construct that violates a clean model, it is entirely reasonable
to argue its ugliness and even its insufficiency.
But that misses the point that a) it's been around for a long time, and
b) it's been useful for a long time.
Arguing that it isn't perfect theoretically or sometimes even
practically misses that it has already proved useful for a long time.
> So the right way to address that point is to be able to specify that
> a figure doesn't get broken, and let the layout engine figure out
> where to put it.
If we didn't have decades of experience showing that a pure abstraction
model won't succeed, your proposed experiment would be reasonable to
try.
What I don't understand is the claim that this time, on this topic,
the IETF will get it right, when others who have worked specifically on
the formatter topic haven't been able to.
> The reason you need these tweaks in LaTeX is because you have no hope
> of fixing LaTeX if it's broken in some way. Write-only code. We
> should avoid that.
The folks who did LaTeX and other abstraction engines were not
ignorant, lazy, or silly. I always thought that the community view was
that they were actually quite good.
So it seems at least a bit awkward to dismiss their experience in the
area of their specialty (and not ours).