Attending: Heather, Paul, Julian, Alice, Tony, Nevil, Adam (dropped off a few minutes early), Robert, Dave
1. v3 vocabulary update
Paul's perspective: there will be a good tool; there is no expectation that v3 will be the required submission format any time soon; and the intention is that starting a v3 document should be easier than starting a v2 document. That third point is the point of disagreement between Julian and Paul.
The implicit part of that is that we can make breaking changes if those breakages will make things easier to start, easier to edit, or able to carry new semantics. So far, adding semantics has not created a breaking change, but simplifying, as in tables and references, could involve breaking changes. Julian would like us to try hard not to make breaking changes if we can avoid them. Paul is not as concerned, because he assumes we will have a converter.
Julian - are we talking about changes that would break a v2 processor, or about v2 documents breaking when processed by a v3 processor? What he is concerned about is old documents failing on new processors. We should simplify where we can, but he hasn't seen any example where simplifying the vocabulary requires breaking v2 documents. Can we get a complete example?
Going back one step - a v2 document breaking under a v3 processor. Wouldn't a v3 processor at that point pull in a converter, do the conversion, then try again? Yes, it could do that, but if we expect the v3 processor to do that, then what we are saying is that a v3 processor will accept v2 documents. That is not significantly different from what Julian is after: from a software point of view, a v3 processor would accept both types of documents. It does mean that an author with a large v2 document only has the choice to convert it to v3 before he can take advantage of v3 features. If v3 is a proper superset of v2, we can just start adding v3 features to the documents and the transition will be simpler.
The concrete example is lists that look just like HTML's (see the rfc-interest thread). Julian agrees that when we fix our lists, we should follow the HTML list model (ordered lists and definition lists). It is tempting to say we will just adopt the HTML elements, but the result would be very confusing, since we would then have a mix of HTML and not-HTML. For lists, the best approach would be to look at what we have and how that relates to the HTML list model, and fix what we have so that we have parity in the semantics, which is not the same as having the same elements.
The biggest difference is what goes inside: whether it is ol or list style=ol, we currently have t's, but then we have a t that is not really a t when it does not start a new bullet. Julian's proposal is to wrap another layer inside of that, which is semantically fine but harder to use. Paul wants to make the inside of the list easier instead.
Julian looked at HTML in more detail today; HTML has a similar challenge, which it solves with a list-item element that takes either inline text or paragraphs. We could adopt that without breaking what we have. Paul says that makes things more difficult for writers.
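The two list models under discussion can be sketched roughly as follows (the element names in the second fragment are illustrative only, not a proposal from the call):

```xml
<!-- xml2rfc v2 today: every item is a <t>, and continuation text
     needs another <t> that does not really start a new item -->
<list style="numbers">
  <t>First item</t>
  <t>Second item</t>
</list>

<!-- An HTML-like model: a list-item element that takes either
     inline text or paragraphs, as HTML's <li> does -->
<ol>
  <li>First item</li>
  <li>
    <t>Second item, first paragraph</t>
    <t>Second item, a continuation paragraph of the same item</t>
  </li>
</ol>
```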
Robert - thinking back to the days when he wrote really big documents, he can't see this being as simple as Paul suggests in doing an automated + manual conversion.
Julian - conversion probably won't be that hard, but if we change things unnecessarily, it will open up bike shedding to more changes than are really necessary. This is a social concern, not a technical concern.
Heather - there must be a middle ground of allowing changes but halting gratuitous changes.
Robert - doesn't think it is as simple as running a converter for a few large documents; they may be outliers. A position of "we will work hard to maintain backward compatibility, and you will have to argue hard against that" makes sense. "Provide concrete examples and let us work from them" allows us to open a richer conversation without undue constraints.
Paul - if we say you can propose breaking ideas and we will discuss them, that's fine. But he doesn't want to say "you are proposing a breaking change, so we'll do something else instead to try and avoid the breaking change". People who write the processors don't get a say; the authors do.
Tony - would rather encourage backward compatibility but not be constrained by it.
Julian - one lesson from the evolution of HTML: if you have a successful markup vocabulary, you try to avoid versioning it. xml2rfc is a very constrained thing compared to HTML, but even so, if we can avoid versioning and having to signal the version to the processor, that would be better.
2. SVG profile update
All Nevil has done so far is make an xml2rfc version of what he's said on the calls. There are several open questions in the draft, including one about linking. What do you think about allowing links from part of a diagram to somewhere else? Might it be reasonable to allow links only within the RFC?
Alice: is there any example where an author has wanted to do something like that?
Nevil: not as far as he knows; this came out of reading the W3C docs.
Paul: in a protocol flow diagram, when you hit another layer, someone would turn that into a link or anchor to another RFC.
Julian: if we have complex documents, we should not make it impossible to have links between these documents. If we allow links within graphics, they should also be allowed to point somewhere else. There is also a stability question.
Robert: how is this different from providing a URL in a reference?
Nevil: it is the same, but we already have problems with stability of references.
Robert: but we do allow them, and we do apply judgement as to whether those links are stable.
Nevil: then the RFC Ed team would have to check these things within drawings. In text it is easy to see where they are, but that would be less easy in a diagram.
Paul: maybe say links are allowed, but only to anchors at other points in the document, with the exception that you could add a fragment component? No - sometimes you have a reference to a document that is a container, and the part you add to the URL is a file name plus a fragment. He has thought of a few more reasons this won't really work, so it is not a good rule. We can create a tool that would automatically check, finding links for the RFC Editor to follow up on.
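A link-finding tool of the kind Paul describes could be sketched along these lines (a minimal sketch; it only distinguishes fragment-only targets from everything else, and the real tool would need a fuller policy):

```python
import xml.etree.ElementTree as ET

# SVG 1.1 links use xlink:href; SVG 2 also allows a plain href attribute.
XLINK_HREF = "{http://www.w3.org/1999/xlink}href"

def find_links(svg_source):
    """Return (internal, external) link targets found in an SVG drawing.

    Fragment-only targets ("#anchor") are internal; everything else
    would need human review for stability.
    """
    internal, external = [], []
    root = ET.fromstring(svg_source)
    for elem in root.iter():
        for attr in (XLINK_HREF, "href"):
            target = elem.attrib.get(attr)
            if target is None:
                continue
            (internal if target.startswith("#") else external).append(target)
    return internal, external
```

Running this over each drawing in a draft would give the RFC Editor a short list of external targets to check, instead of having to spot them visually in the diagram.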
Nevil - still intends to write an appendix with the element names and attributes allowed. Will need a tool to check that the profile has been followed and that a drawing is compatible with the RFC SVG profile, something that could be on the website letting people check chunks of SVG.
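The profile checker could be as simple as a whitelist walk over the SVG tree. A minimal sketch, assuming the whitelists below are hypothetical stand-ins for whatever the appendix ends up allowing:

```python
import xml.etree.ElementTree as ET

# Illustrative whitelists only; the real ones would come from the
# appendix of the SVG profile draft.
ALLOWED_ELEMENTS = {"svg", "g", "path", "rect", "circle", "line",
                    "text", "desc", "title"}
ALLOWED_ATTRIBUTES = {"x", "y", "width", "height", "d", "fill",
                      "stroke", "stroke-width", "viewBox"}

def check_svg_profile(svg_source):
    """Return a list of problem strings for anything outside the profile."""
    problems = []
    root = ET.fromstring(svg_source)
    for elem in root.iter():
        # Drop a namespace prefix like {http://www.w3.org/2000/svg}
        local = elem.tag.split("}")[-1]
        if local not in ALLOWED_ELEMENTS:
            problems.append("element not in profile: " + local)
        for attr in elem.attrib:
            attr_local = attr.split("}")[-1]
            if attr_local not in ALLOWED_ATTRIBUTES:
                problems.append("attribute not in profile: %s on %s"
                                % (attr_local, local))
    return problems
```

Wrapped in a small web form, this is the kind of "paste a chunk of SVG, get a verdict" checker the website could host.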
One other open question - tools like Inkscape produce bloated SVG with detail that does not need to be there. Maybe someday someone could make a tool to strip out what's not necessary (but that probably doesn't need to go in this document).
Paul - that tool should also be fairly easy. “If you give me an SVG, I will give you back an SVG with the bad stuff stripped out - is this what you want to see?” This doesn't go into the doc, but it is something to consider for the spec.
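The stripping tool is indeed straightforward for the common cases: editor-specific material lives in its own XML namespaces, so it can be removed mechanically. A minimal sketch (the namespace list is illustrative, not exhaustive):

```python
import xml.etree.ElementTree as ET

# Namespaces that editors like Inkscape add but that carry no
# drawing information (illustrative list only).
EDITOR_NS = (
    "{http://www.inkscape.org/namespaces/inkscape}",
    "{http://sodipodi.sourceforge.net/DTD/sodipodi-0.0.dtd}",
)

def strip_editor_cruft(svg_source):
    """Return the SVG with editor-specific elements/attributes removed."""
    root = ET.fromstring(svg_source)
    # Snapshot the tree first so removals don't upset iteration.
    for parent in list(root.iter()):
        for child in list(parent):
            if child.tag.startswith(EDITOR_NS) or child.tag.endswith("}metadata"):
                parent.remove(child)
        for attr in list(parent.attrib):
            if attr.startswith(EDITOR_NS):
                del parent.attrib[attr]
    return ET.tostring(root, encoding="unicode")
```

The "is this what you want to see?" step Paul describes would just be showing the author a diff or a rendering of the stripped output before accepting it.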
What about normative images? It may depend on the format of the images - for example, PostScript images are normative.
3. PDF to HTML mapping
There are obviously a bunch of holes in it, and many placeholders. Comments would be particularly useful regarding areas that are not covered, not mentioned, and lacking placeholders.
4. IETF 89
I know we will have a BoF; we don't have a date/time yet. The BoF will be both "here's the guidance I'm documenting for use of non-ASCII" and an update on where we are with the format work.
I am concerned about the requirements draft for HTML - haven't seen any updates on that. It does look like we will have SVG, PDF→HTML, the v2 vocabulary, and the v3 vocabulary in a good state for community education. The last thing that will be needed is the SoW that pulls all this together to say "we are looking for tool(s) that create these things (final XML file, HTML, PDF, TXT output), using this vocabulary (v3), and check against these profiles (SVG)". Will this be enough to write the spec? Robert - this will probably be OK, but we won't really know until we get in and start grinding on details. You get through that first round of the SoW just fine, 85-90% of the work, but the remaining 10% will take grinding.
One of the things that would be useful for people to think about (not just the Design Team) is how the RFC Editor info page will be structured. A lot of people still assume that the way you will get an RFC is with a URL that ends in .txt, .html, etc. People aren't really thinking in terms of multiple outputs and may want to change their decision. The question is whether the canonical URI is something without a file extension, with the web server doing the right thing. People will have strong and differing opinions on that. Asking that question will help get people focusing on all the formats, not just the one they care about.
If we have a date or an RFC number for the cutover, Julian's wish would be that old RFCs are served as txt, and new RFCs are served as HTML to the web browser, with a URI that does not contain a file extension.
For Paul, when requesting RFC7500 he would want the info page. This discussion should definitely happen and would be useful to get people thinking.
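The "extensionless URI, server does the right thing" idea amounts to simple content negotiation. A minimal sketch of the selection logic only (the cutover number 8000 is invented here, and a real server would parse the Accept header properly rather than substring-match):

```python
def representation_for(rfc_number, accept_header):
    """Pick which stored file to serve for an extensionless RFC URI,
    based on the Accept header and whether the RFC predates the
    format cutover."""
    CUTOVER = 8000  # hypothetical first RFC number with new-format outputs
    if rfc_number < CUTOVER:
        # Old RFCs only exist as plain text.
        return "rfc%d.txt" % rfc_number
    if "text/html" in accept_header:
        return "rfc%d.html" % rfc_number
    if "application/pdf" in accept_header:
        return "rfc%d.pdf" % rfc_number
    return "rfc%d.txt" % rfc_number
```

With this policy a browser (which sends text/html in Accept) gets HTML for new RFCs, while a plain fetch falls back to text, which is roughly the behavior Julian describes.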
Robert - thought we were talking about going ahead to create an example: a prototype of the canonical format and the derived formats, mockups showing the differences you will see between them. We need an RFCdummy. Paul - we do need this, but probably need to get farther with the format first.
What is the canonical URL, and what do you get when you resolve it? Doing a mockup of that will be useful. Maybe think about this in the Toronto timeframe.
Something to consider - one of the things we have wrestled with regarding tools, etc., was whether to make this a decision that a user could influence with a cookie.