Can you propose a more elegant way to 'tokenize' C# code for HTML formatting?

    <p><em>(<a href="https://stackoverflow.com/questions/174418/can-you-improve-this-lines-of-code-algorithm-in-f">This question</a> about refactoring F# code got me one down vote, but also some interesting and useful answers. And 62 F# questions out of the 32,000+ on SO seems pitiful, so I'm going to take the risk of more disapproval!)</em></p> <p>I was trying to post a bit of code on a blogger blog yesterday, and turned to <a href="http://manoli.net/csharpformat/" rel="nofollow noreferrer">this site</a>, which I had found useful in the past. However, the blogger editor ate all the style declarations, so that turned out to be a dead end.</p> <p>So (like any hacker), I thought "how hard can it be?" and rolled my own in &lt;100 lines of F#.</p> <p>Here is the 'meat' of the code, which turns an input string into a list of 'tokens'. Note that these tokens aren't to be confused with the lexing/parsing-style tokens. I did look at those briefly, and though I hardly understood anything, I did understand that they would give me <em>only</em> tokens, whereas I want to keep my original string.</p> <p>The question is: is there a more elegant way of doing this? I don't like the n re-definitions of s required to remove each token string from the input string, but it's difficult to split the string into potential tokens in advance, because of things like comments, strings and the #region directive (which contains a non-word character).</p> <pre><code>//Types of tokens we are going to detect type Token = | Whitespace of string | Comment of string | Strng of string | Keyword of string | Text of string | EOF //turn a string into a list of recognised tokens let tokenize (s:String) = //this is the 'parser' - should we look at compiling the regexs in advance? let nexttoken (st:String) = match st with | st when Regex.IsMatch(st, "^\s+") -&gt; Whitespace(Regex.Match(st, "^\s+").Value) | st when Regex.IsMatch(st, "^//.*?\r?\n") -&gt; Comment(Regex.Match(st, "^//.*?\r?\n").Value) //this is double slash-style comments | st when Regex.IsMatch(st, "^/\*(.|[\r?\n])*?\*/") -&gt; Comment(Regex.Match(st, "^/\*(.|[\r?\n])*?\*/").Value) // /* */ style comments http://ostermiller.org/findcomment.html | st when Regex.IsMatch(st, @"^""([^""\\]|\\.|"""")*""") -&gt; Strng(Regex.Match(st, @"^""([^""\\]|\\.|"""")*""").Value) // unescaped = "([^"\\]|\\.|"")*" http://wordaligned.org/articles/string-literals-and-regular-expressions | st when Regex.IsMatch(st, "^#(end)?region") -&gt; Keyword(Regex.Match(st, "^#(end)?region").Value) | st when st &lt;&gt; "" -&gt; match Regex.Match(st, @"^[^""\s]*").Value with //all text until next whitespace or quote (this may be wrong) | x when iskeyword x -&gt; Keyword(x) //iskeyword uses Microsoft.CSharp.CSharpCodeProvider.IsValidIdentifier - a bit fragile... 
(If anyone is really interested, I am happy to post the rest of the code.)

**EDIT** Using the [excellent suggestion](https://stackoverflow.com/questions/228605/can-you-propose-a-more-elegant-way-to-tokenize-c-code-for-html-formatting/228609#228609) of [active patterns](http://blogs.msdn.com/chrsmith/archive/2008/02/21/Introduction-to-F_2300_-Active-Patterns.aspx) by kvb, the central bit looks like this, much better!

```fsharp
let nexttoken (st:String) =
    match st with
    | Matches "^\s+" s -> Whitespace(s)
    | Matches "^//.*?\r?(\n|$)" s -> Comment(s) //this is double slash-style comments
    | Matches "^/\*(.|[\r?\n])*?\*/" s -> Comment(s) // /* */ style comments http://ostermiller.org/findcomment.html
    | Matches @"^@?""([^""\\]|\\.|"""")*""" s -> Strng(s) // unescaped regexp = ^@?"([^"\\]|\\.|"")*" http://wordaligned.org/articles/string-literals-and-regular-expressions
    | Matches "^#(end)?region" s -> Keyword(s)
    | Matches @"^[^""\s]+" s -> //all text until next whitespace or quote (this may be wrong)
        match s with
        | IsKeyword x -> Keyword(s)
        | _ -> Text(s)
    | _ -> EOF
```
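The `Matches` and `IsKeyword` active patterns aren't defined in the post. A minimal reconstruction of the shape they would need, reusing the `iskeyword` sketch above, might look like this (an assumption on my part, not kvb's actual answer):

```fsharp
open System.Text.RegularExpressions

// Partial active pattern, parameterised by a regex: succeeds when the
// pattern matches the input (the call sites anchor with ^) and binds the
// matched text.
let (|Matches|_|) (pattern:string) (input:string) =
    let m = Regex.Match(input, pattern)
    if m.Success then Some m.Value else None

// Partial active pattern: succeeds when the string is a C# keyword,
// reusing the CSharpCodeProvider test sketched earlier.
let (|IsKeyword|_|) (s:string) =
    if iskeyword s then Some s else None
```

With these in scope, each case writes (and runs) its regex once, which is what removes the duplicated `Regex.IsMatch`/`Regex.Match` pairs from the first version.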