How to extract the wiki code instead of HTML from a page on Wikipedia in C#?












Any ideas how to download the wiki code that shows up when you click "edit" on a Wikipedia page? Example:



//EXAMPLE:

using System.Net;

public void download() {
    string page = "https://en.wikipedia.org/w/index.php?title=Albatross&action=edit";

    using (WebClient client = new WebClient())
    {
        string htmlCode = client.DownloadString(page);
        // how to get the wiki code in the html edit box here?
    }
}









c# .net wikipedia-api






asked Nov 20 '18 at 19:43 by Bill Moore
  • it appears to be within a textarea, so find the textarea in your response and then get the content within it
    – ADyson, Nov 20 '18 at 19:45

  • Use action=raw, see How to download the wikicode of a Wikipedia page?
    – wimh, Nov 20 '18 at 19:49

  • What have you tried so far? Where is your concrete problem? Have you inspected the response?
    – Markus Safar, Nov 20 '18 at 19:49
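
A minimal sketch of ADyson's suggestion above, without any extra library: download the edit page as in the question and pull the contents of the edit textarea out of the returned HTML. The textarea id wpTextbox1 is the same one used in the HtmlAgilityPack answer below; matching HTML with a regular expression is fragile, so treat this only as an illustration.

using System;
using System.Net;
using System.Text.RegularExpressions;

public class Program
{
    public static void Main()
    {
        string page = "https://en.wikipedia.org/w/index.php?title=Albatross&action=edit";

        using (WebClient client = new WebClient())
        {
            string htmlCode = client.DownloadString(page);

            // Grab everything between the opening and closing tags of the edit box (id wpTextbox1)
            Match m = Regex.Match(htmlCode,
                @"<textarea[^>]*\bid=[""']wpTextbox1[""'][^>]*>(?<body>.*?)</textarea>",
                RegexOptions.Singleline);

            if (m.Success)
            {
                // The textarea body is HTML-encoded, so decode entities such as &amp; back to &
                string wikiCode = WebUtility.HtmlDecode(m.Groups["body"].Value);
                Console.WriteLine(wikiCode);
            }
        }
    }
}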


















2 Answers
Update without HAP
As per wimh's comment, simply appending &action=raw as a query string lets you do the job without scraping.



using System;
using System.Net.Http;

public class Program
{
    private static HttpClient client = new HttpClient();

    public static void Main()
    {
        // action=raw makes MediaWiki return the page's wikitext instead of the rendered HTML
        var response = client.GetAsync("https://en.wikipedia.org/w/index.php?title=Albatross&action=raw").Result;
        var rawEditCode = response.Content.ReadAsStringAsync().Result;

        Console.WriteLine(rawEditCode);
    }
}


Fiddle: https://dotnetfiddle.net/NwZC3I



Original Answer
You could use HtmlAgilityPack and simply scrape it:



using System;
using HtmlAgilityPack;

public class Program
{
    public static void Main()
    {
        HtmlWeb web = new HtmlWeb();
        HtmlDocument html = web.Load("https://en.wikipedia.org/w/index.php?title=Albatross&action=edit");

        // wpTextbox1 is the id of the edit-box textarea on the action=edit page
        var editorContent = html.DocumentNode.SelectSingleNode(@"//textarea[@id='wpTextbox1']").InnerHtml;
        Console.WriteLine(editorContent);
    }
}


dotNetFiddle: https://dotnetfiddle.net/fmsT1m
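
For use inside async code, the same request can be awaited rather than blocked on with .Result. A minimal sketch of that variant, assuming C# 7.1+ for the async Main; the User-Agent header is an assumption added here (not part of the original answer) and is generally recommended for automated requests to Wikipedia:

using System;
using System.Net.Http;
using System.Threading.Tasks;

public class Program
{
    private static readonly HttpClient client = new HttpClient();

    public static async Task Main()
    {
        // Hypothetical User-Agent; adjust it to identify your own tool
        client.DefaultRequestHeaders.UserAgent.ParseAdd("WikiCodeFetcher/1.0");

        string url = "https://en.wikipedia.org/w/index.php?title=Albatross&action=raw";

        // Await the download instead of blocking on .Result
        string rawEditCode = await client.GetStringAsync(url);

        Console.WriteLine(rawEditCode);
    }
}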






answered Nov 20 '18 at 20:42 by Marco, edited Nov 20 '18 at 20:48














string[] GetWikiCode(string topic)
{
    string htmlCode = "";
    string url = "https://en.wikipedia.org/w/index.php?title="
        + topic + "&action=raw";
    Console.WriteLine(String.Format("Downloading: {0}", url));
    using (WebClient client = new WebClient())
    {
        // action=raw returns the wikitext directly, so no HTML parsing is needed
        htmlCode = client.DownloadString(url);
    }
    // Split the wikitext into non-empty lines (handles both CRLF and LF endings)
    string[] delimit = new string[] { "\r\n", "\n" };
    string[] result = htmlCode.Split(delimit,
        StringSplitOptions.RemoveEmptyEntries);
    return result;
}
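
A hypothetical call site, assuming GetWikiCode above is declared (for example as a static method) in the same class; "Albatross" matches the page used in the question, and titles containing spaces or special characters may need escaping (for example with Uri.EscapeDataString) before being inserted into the URL:

// Print the wikitext of the Albatross article line by line
foreach (string line in GetWikiCode("Albatross"))
{
    Console.WriteLine(line);
}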





answered Nov 20 '18 at 22:36 by Bill Moore




















