DM546 - Compiler Construction
 
Spring 2019
Kim Skak Larsen


In Connection with the Exam
How much am I supposed to cover during the presentation?
At the final lecture, I will give a sketch of how all questions may be organized. It is important to point out that the pace at the exam should be much higher than the pace at the lectures. At the lectures, material is explained to students who, apart from their preparation, do not know the material in advance and are supposed to learn it. At the exam, the material is presented to professionals with the purpose of demonstrating how much the student knows. Choose interesting material, i.e., do not give too long an introduction and do not dwell on trivial special cases.

Is there any reason to prepare my own examples if you're giving the examples I should use anyway?
For the topic you draw, you get to give a presentation, including using your own examples. After this part, when I start asking questions to other topics in the curriculum, I will provide examples to discuss when relevant.

Do I need to know the bison tool for the exam?
I won't ask you about the bison tool as such, so you don't need to read the manual or anything like that. You do need to know how to build a tool like bison yourself, i.e., you have to be able to explain the algorithms for taking a grammar, constructing a DFA for it, and extracting a parser table from the DFA.
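To make the grammar-to-DFA step concrete, here is an illustrative sketch (not from the course material) of the item-set construction for a tiny toy grammar. It builds the LR(0) DFA via closure and goto; a full LR(1) construction additionally carries lookahead symbols in the items, but the DFA-building mechanics are the same. The grammar and all names are my own choices.

```python
# Sketch: LR(0) item-set (DFA state) construction for a toy grammar.
# An item is (lhs, rhs, dot): a production with a dot position in the rhs.
# Grammar:  S' -> S    S -> ( S )    S -> x      (illustrative only)

GRAMMAR = {
    "S'": [("S",)],
    "S":  [("(", "S", ")"), ("x",)],
}
NONTERMINALS = set(GRAMMAR)

def closure(items):
    # repeatedly add items A -> . rhs for every nonterminal A right after a dot
    items = set(items)
    changed = True
    while changed:
        changed = False
        for lhs, rhs, dot in list(items):
            if dot < len(rhs) and rhs[dot] in NONTERMINALS:
                for prod in GRAMMAR[rhs[dot]]:
                    item = (rhs[dot], prod, 0)
                    if item not in items:
                        items.add(item)
                        changed = True
    return frozenset(items)

def goto(items, symbol):
    # advance the dot over `symbol` in every item where the dot precedes it
    moved = {(l, r, d + 1) for l, r, d in items if d < len(r) and r[d] == symbol}
    return closure(moved)

# build all DFA states reachable from the start item S' -> . S
start = closure({("S'", ("S",), 0)})
states, work, edges = {start: 0}, [start], {}
while work:
    state = work.pop()
    symbols = {r[d] for _, r, d in state if d < len(r)}
    for sym in symbols:
        target = goto(state, sym)
        if target not in states:
            states[target] = len(states)
            work.append(target)
        edges[(states[state], sym)] = states[target]

print(len(states))  # number of DFA states for this grammar: 6
```

From `states` and `edges` one then extracts the parser table: shift entries from the edges on terminals, goto entries from the edges on nonterminals, and reduce entries from items with the dot at the end.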

We just had labs - do we need to know all this assembler stuff for the exam, or is it just for the students doing their bachelor project in this topic?
Everything in this course is relevant for all students. It is not important to remember the concrete syntax, but you need a really good understanding of many aspects of assembler programming. To discuss code generation for functions, you need to know the protocols for how functions are called, and even for generating code for an expression which is just a variable, you need to know how to generate code that finds the value of that variable at runtime. A good understanding of the target language is also required for liveness analysis, garbage collection, and peep-hole optimization. Thus, this background is highly relevant for code generation and fairly important for three other questions, i.e., a total of four out of seven questions. So, you definitely did not waste your time going to the labs! The bachelor project students will be forced to learn this (after the exam) to complete their projects, and they will waste a lot of time later if they do not spend time on this now.
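As a small illustration of what "finding the value of a variable at runtime" means for code generation: local variables live at fixed offsets from the frame pointer, so the compiler emits a frame-relative load. The following sketch is my own illustration (the AT&T-style syntax, register names, and frame layout are assumptions, not the course's concrete templates).

```python
# Sketch: emitting code that loads a local variable at runtime.
# Locals are assumed to live at fixed byte offsets from the frame
# pointer %rbp, recorded in a symbol-table-like dictionary.

def emit_load(var, frame_offsets, target="%rax"):
    """Emit an instruction loading local variable `var` into `target`."""
    offset = frame_offsets[var]            # byte offset relative to %rbp
    return f"movq {offset}(%rbp), {target}"

# hypothetical frame layout: first local at -8(%rbp), second at -16(%rbp)
frame = {"x": -8, "y": -16}
print(emit_load("x", frame))   # movq -8(%rbp), %rax
print(emit_load("y", frame))   # movq -16(%rbp), %rax
```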

What do "top-down" and "bottom-up" mean?
These are the traditional terms for two different parsing techniques. They refer to the way in which the parse tree is built during the parsing of the input string. I have used these terms in the lectures, but the book has chosen other ways of naming the techniques. In the book, the corresponding terms are "predictive" (Section 3.2) and "LR" (Section 3.3). For "top-down", I'm particularly interested in hearing about LL(1), and for "bottom-up", the interesting topic is LR(1). Don't waste time explaining LR(0) or SLR, but note that some definitions and algorithms of relevance to LR(1) are given in those sections.

Is it correct, as the book indicates, that recursive descent is the same as predictive parsing?
As often happens, different authors do not agree on how to use the terms, and when they start involving the concept of backtracking as well, it becomes even less clear. If one allows full backtracking, then the algorithms turn into exhaustive search, and it does not make sense to call them anything else at that point. So, let us assume that there is no backtracking. Then the recursive descent approach is just one way to implement a predictive parser. It is the term for the approach where one writes mutually recursive functions, one function for each nonterminal, with a number of cases in each function equal to the number of different rules with the given nonterminal as the left-hand side. Another way of implementing predictive parsing without using recursive descent would be to use a parser table.
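As an illustrative sketch of the recursive descent approach (the grammar and names are my own, not from the book), here is one function per nonterminal for a small LL(1) expression grammar, where each function selects its case by looking at the next token:

```python
# Recursive-descent sketch for the toy LL(1) grammar
#   E  -> T E'
#   E' -> + T E' | (empty)
#   T  -> id | ( E )
# One mutually recursive function per nonterminal; each function
# chooses a rule by inspecting the next token (no backtracking).

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else "$"

    def eat(self, expected):
        if self.peek() != expected:
            raise SyntaxError(f"expected {expected!r}, got {self.peek()!r}")
        self.pos += 1

    def parse_E(self):               # E -> T E'
        self.parse_T()
        self.parse_Eprime()

    def parse_Eprime(self):
        if self.peek() == "+":       # rule E' -> + T E'
            self.eat("+")
            self.parse_T()
            self.parse_Eprime()
        # otherwise: rule E' -> (empty), predicted on FOLLOW(E')

    def parse_T(self):
        if self.peek() == "id":      # rule T -> id
            self.eat("id")
        elif self.peek() == "(":     # rule T -> ( E )
            self.eat("(")
            self.parse_E()
            self.eat(")")
        else:
            raise SyntaxError(f"unexpected token {self.peek()!r}")

def accepts(tokens):
    p = Parser(tokens)
    p.parse_E()
    return p.peek() == "$"           # accept only if all input was consumed

print(accepts(["id", "+", "(", "id", "+", "id", ")"]))  # True
```

Note how the prediction is driven purely by the next token; the table-driven alternative would encode the same choices in a parser table indexed by (nonterminal, token).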

When you run Algorithm 3.13 on an example, you sometimes get sequences like Y2, ..., Y1 - what does that mean?
This is standard mathematical notation. If you consider a sequence Y1, ..., Yk and look at some example subsequences, then Yi, ..., Yj is
  • Yi, Yi+1, if j = i+1.
  • Yi, if j = i.
  • the empty sequence, if j = i-1. This includes the case where k = 0, in which the whole sequence Y1, ..., Yk is empty.
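This notation shows up when iterating over the prefixes Y1, ..., Y(i-1) of a right-hand side. Here is an illustrative sketch (my own toy grammar, not an example from the book) of the iterative fixed-point computation in the style of Algorithm 3.13 for nullable and FIRST; note how the empty prefix (the case j = i-1 with i = 1) is handled for free:

```python
# Sketch: iterative computation of nullable and FIRST in the style of
# Algorithm 3.13 - iterate over all productions X -> Y1 ... Yk until
# nothing changes.  Toy grammar (illustrative):
#   E -> T E'      E' -> + T E' | (empty)      T -> id | ( E )

grammar = {
    "E":  [["T", "E'"]],
    "E'": [["+", "T", "E'"], []],    # [] is the empty right-hand side (k = 0)
    "T":  [["id"], ["(", "E", ")"]],
}
nonterminals = set(grammar)

nullable = {A: False for A in nonterminals}
first = {A: set() for A in nonterminals}

def first_of(symbol):
    # FIRST of a terminal is the terminal itself
    return first[symbol] if symbol in nonterminals else {symbol}

changed = True
while changed:
    changed = False
    for X, rules in grammar.items():
        for rhs in rules:                  # rhs is Y1 ... Yk (k may be 0)
            # X is nullable if all of Y1 ... Yk are nullable
            if all(Y in nonterminals and nullable[Y] for Y in rhs):
                if not nullable[X]:
                    nullable[X] = True
                    changed = True
            # FIRST(X) includes FIRST(Yi) whenever Y1 ... Y(i-1) are all
            # nullable; for i = 1 the prefix is the empty sequence,
            # which is trivially all-nullable
            for i, Y in enumerate(rhs):
                if all(Z in nonterminals and nullable[Z] for Z in rhs[:i]):
                    new = first_of(Y) - first[X]
                    if new:
                        first[X] |= new
                        changed = True

print(nullable["E'"], sorted(first["E"]))   # True ['(', 'id']
```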

I think that the book's coverage of code generation is very different from your coverage at the lectures!
Yes, the approach is somewhat different. For the oral exam, my recommendation is that you use Supplementary Notes for DM546 as your starting point; see also my explanations in connection with the curriculum.

What can you talk about with respect to optimization?
The purpose of optimization and the different forms of optimization is a good starting point. Peep-hole optimization ought to be covered, including termination of the process (termination function); it could be an advantage to demonstrate this termination argument on an interesting example, i.e., an example which, among other things, contains patterns that actually increase the size of the code; see Supplementary Notes for DM546. If you can add some of your own examples of how templates used for different parts of the code generation can give rise to non-optimal code that we can deal with in a peep-hole process, that would be excellent. If you run out of material, it is also fine to give a brief account of the register allocation problem at the end, even though this is a question in its own right.
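To fix ideas, here is an illustrative sketch of the peep-hole process itself (the instruction representation and the two rewrite rules are my own, not the course's concrete templates). Both rules here strictly shrink the code, so the code length itself works as a termination function; with size-increasing patterns, as mentioned above, one needs a more carefully chosen measure.

```python
# Sketch: a peep-hole optimizer that repeatedly scans the instruction
# list for a window matching a rule and rewrites it, until no rule
# applies.  Instructions are tuples like ("mov", "rbx", "rcx").
# Rules (illustrative): (match predicate, window width, replacement).

RULES = [
    # pushing a register and immediately popping it back is a no-op
    (lambda w: w[0][0] == "push" and w[1] == ("pop", w[0][1]), 2, []),
    # a move followed by the reverse move: the second move is redundant
    (lambda w: w[0][0] == "mov" and w[1] == ("mov", w[0][2], w[0][1]),
     2, lambda w: [w[0]]),
]

def peephole(code):
    changed = True
    while changed:                      # terminates: every rule shrinks code
        changed = False
        for i in range(len(code)):
            for match, width, repl in RULES:
                window = code[i:i + width]
                if len(window) == width and match(window):
                    new = repl(window) if callable(repl) else repl
                    code = code[:i] + new + code[i + width:]
                    changed = True
                    break
            if changed:
                break                   # rescan from the start after a rewrite
    return code

prog = [
    ("push", "rax"),
    ("pop", "rax"),
    ("mov", "rbx", "rcx"),
    ("mov", "rcx", "rbx"),
]
print(peephole(prog))   # [('mov', 'rbx', 'rcx')]
```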
