Multiple entries of PCIe address range in /proc/mem log























I use dma_alloc_coherent() in my custom driver to obtain both a kernel virtual address and a bus address for a DMA buffer:



res->KernelAddress = (u64)dma_alloc_coherent(&DevExt->pdev->dev, size, &res->BusAddress, GFP_ATOMIC);


When I print the bus address (res->BusAddress) with %llx, I get 80009000.
I checked /proc/iomem to verify which range this address falls in, but it seems to match multiple entries.

The contents of /proc/iomem are shown below:



10000000-10000fff : /pcie-controller@10003000/pci@1,0
10003000-100037ff : pcie-pads
10003800-10003fff : pcie-afi
10004000-10004fff : /pcie-controller@10003000/pci@3,0
40000000-4fffffff : pcie-config-space
50100000-57ffffff : pcie-non-prefetchable
  50800000-52ffffff : PCI Bus 0000:01
    50800000-5087ffff : 0000:01:00.0
    51000000-51ffffff : 0000:01:00.0
    52000000-52ffffff : 0000:01:00.0
58000000-7fffffff : pcie-prefetchable
  58000000-58ffffff : PCI Bus 0000:01
    58000000-58ffffff : 0000:01:00.0
80000000-d82fffff : System RAM
  80080000-810fafff : Kernel code
  8123f000-814b3fff : Kernel data
d9300000-efffffff : System RAM
f0200000-275ffffff : System RAM
276600000-2767fffff : System RAM



  1. Is 80009000 a valid bus address? Which region does it belong to?

  2. Is it necessary to call dma_mmap_coherent() after dma_alloc_coherent() to map the buffer properly?


Thanks in advance!










pci-e






edited Nov 19 at 20:10 by Gil Hamilton










asked Nov 19 at 5:05 by PBang
























1 Answer




























          From https://www.kernel.org/doc/Documentation/bus-virt-phys-mapping.txt (some of the details of this file are now obsolete but it's the best overview of the issue):




          Essentially, the three ways of addressing memory are (this is "real memory",
          that is, normal RAM--see later about other details):




          • CPU untranslated. This is the "physical" address. Physical address
            0 is what the CPU sees when it drives zeroes on the memory bus.


          • CPU translated address. This is the "virtual" address, and is
            completely internal to the CPU itself with the CPU doing the appropriate
            translations into "CPU untranslated".


          • bus address. This is the address of memory as seen by OTHER devices,
            not the CPU. Now, in theory there could be many different bus
            addresses, with each device seeing memory in some device-specific way, but
            happily most hardware designers aren't actually actively trying to make
            things any more complex than necessary, so you can assume that all
            external hardware sees the memory the same way.



          Now, on normal PCs the bus address is exactly the same as the physical
          address, and things are very simple indeed. However, they are that simple
          because the memory and the devices share the same address space, and that is
          not generally necessarily true on other PCI/ISA setups.




          The bottom line is that the answer to your question is architecture-dependent.



          In your /proc/iomem snippet, note that the listing is nested: an address can appear to fall under several entries because some entries are subsets of others (Kernel code, for example, sits inside the first System RAM range). Taken literally, 80009000 lies in the first System RAM range (80000000-d82fffff), just below the nested Kernel code entry, which starts at 80080000. But /proc/iomem lists physical addresses, while what dma_alloc_coherent handed you is a bus address; the two are not necessarily the same on your architecture, so looking the value up in /proc/iomem may not mean what you expect.
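          The nesting can be checked mechanically. The following sketch is not driver code; the region bounds are simply transcribed from the /proc/iomem listing in the question, and it reports every listed region whose range contains a given address:

```python
# Region bounds transcribed from the /proc/iomem listing in the question.
# Ranges in /proc/iomem are inclusive on both ends.
regions = [
    ("pcie-non-prefetchable", 0x50100000, 0x57ffffff),
    ("PCI Bus 0000:01",       0x50800000, 0x52ffffff),
    ("pcie-prefetchable",     0x58000000, 0x7fffffff),
    ("PCI Bus 0000:01",       0x58000000, 0x58ffffff),
    ("0000:01:00.0",          0x58000000, 0x58ffffff),
    ("System RAM",            0x80000000, 0xd82fffff),
    ("Kernel code",           0x80080000, 0x810fafff),
    ("Kernel data",           0x8123f000, 0x814b3fff),
]

def containing(addr):
    """Return the names of all regions whose range contains addr."""
    return [name for name, lo, hi in regions if lo <= addr <= hi]

# An address inside the prefetchable window matches three nested entries.
print(containing(0x58001000))
# The address from the question matches only System RAM: it is below the
# start of the nested Kernel code region at 0x80080000.
print(containing(0x80009000))
```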



          dma_alloc_coherent also maps the memory into kernel virtual address space, so you shouldn't need to do anything else to access it from kernel code. (dma_mmap_coherent is used to map the memory into user virtual address space.)
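          A minimal sketch of that pattern, assuming a single buffer and a struct device pointer saved at probe time; the names (my_buf, my_dev, my_alloc, my_mmap) are illustrative, not taken from the asker's driver:

```c
#include <linux/dma-mapping.h>
#include <linux/fs.h>
#include <linux/mm.h>

struct my_buf {
	void *cpu_addr;        /* kernel virtual address */
	dma_addr_t dma_handle; /* bus/DMA address as seen by the device */
	size_t size;
};

static struct my_buf buf;
static struct device *my_dev; /* assumed: set to &pdev->dev in probe() */

static int my_alloc(void)
{
	buf.size = PAGE_SIZE;
	buf.cpu_addr = dma_alloc_coherent(my_dev, buf.size,
					  &buf.dma_handle, GFP_KERNEL);
	return buf.cpu_addr ? 0 : -ENOMEM;
}

/* mmap file operation: expose the same coherent buffer to user space. */
static int my_mmap(struct file *filp, struct vm_area_struct *vma)
{
	/*
	 * dma_mmap_coherent() does the remap_pfn_range()-style work for us,
	 * using the (cpu_addr, dma_handle) pair from dma_alloc_coherent(),
	 * so the driver never needs the raw physical address.
	 */
	return dma_mmap_coherent(my_dev, vma, buf.cpu_addr,
				 buf.dma_handle, buf.size);
}
```

          User space then opens the device node and calls mmap(); the pages it receives alias the same coherent buffer the device DMAs into.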



























          • Doesn't dma_alloc_coherent() give a bus address and a virtual address? Our custom application requires the physical address.
            – PBang
            Nov 20 at 8:09










          • How would your application use the physical address? The only way you can use a physical address is to create a virtual address that maps to it. That is already done for you (for kernel space) by dma_alloc_coherent and can be done for user-space with dma_mmap_coherent.
            – Gil Hamilton
            Nov 20 at 16:21










          • The physical address is used by remap_pfn_range() for mapping. With the SMMU disabled, there was no issue getting the physical address with virt_to_phys(). But now that the SMMU is on, an alternative way to obtain the physical address must be found. Do you mean that dma_alloc_coherent and dma_mmap_coherent do the same thing but with different scope? And how do I map the virtual address returned by dma_mmap_coherent to get the physical address?
            – PBang
            Nov 21 at 5:19










          • After calling dma_alloc_coherent, you should be able to pass its return values to dma_mmap_coherent in order to map the memory into user space. The latter function ultimately calls remap_pfn_range for you. See include/linux/dma-mapping.h for the source and follow the code from there. (It helps to learn to use tags: run make tags in the kernel source hierarchy, then use your editor to chase tags.)
            – Gil Hamilton
            Nov 21 at 18:56












          • The vendor-provided driver and application flow is something like this: 1. dma_alloc_coherent(), 2. virt_to_phys(), 3. mmap(phys_addr, kern_addr), 4. remap_pfn_range(). I tried using dma_mmap_coherent() in place of remap_pfn_range(), but it resulted in a page fault. Does the vma struct hold dma_addr_t info? I wonder if the intermediate mmap(phys_addr, kern_addr) creates issues. This flow works when there is no SMMU. Also, the custom driver has multiple intermediate files which are not well organized. Is streaming DMA, where virt_to_phys() can be used to get the physical address, better than coherent DMA?
            – PBang
            Nov 22 at 6:45











answered Nov 19 at 20:51 by Gil Hamilton












          • Isnt dma_alloc_coherent () give bus addr and virtual addr?Our custom application requires the physical addr.
            – PBang
            Nov 20 at 8:09










          • How would your application use the physical address? The only way you can use a physical address is to create a virtual address that maps to it. That is already done for you (for kernel space) by dma_alloc_coherent and can be done for user-space with dma_mmap_coherent.
            – Gil Hamilton
            Nov 20 at 16:21










          • Physical address is used by remap_pfn_range () for mapping. With SMMU disabled, there was no issue in getting the physical address with virt_to_phys (). But now that SMMU is ON, an alternative to calculate the physical address must be found. You mean to say that dma_alloc_coherent and dma_mmap_coherent are the same but the scope is different? But how to map the virtual address returned by dma_mmap_coherent to get the physical address?
            – PBang
            Nov 21 at 5:19










          • After calling dma_alloc_coherent, you should be able to pass its return values to dma_mmap_coherent in order to map the memory to user space. The latter function ultimately calls remap_pfn_range for you. See include/linux/dma-mapping.h for the source and follow code from there. (Helps to learn to use tags: run make tags in kernel source hierarchy, then use your editor to chase tags.)
            – Gil Hamilton
            Nov 21 at 18:56












          • Vendor provided driver and application flow is something like this: 1. dma_alloc_coherent, 2. virt_to_phys 3. mmap (phys_add, kern_addr) 4. remap_pfn_range(). I tried using dma_mmap_coherent() in the place of remap_pfn_range(), but resulted in page fault error. Does vma struct hold dma_addr_t info? I wonder if intermediate mmap (phys_addr, kern_addr) creates issues. This flow works when there is no SMMU. Also, custom driver has multiple intermediate files which are not well organized. Is streaming DMA better than coherent DMA where virt_to_phys() can be used for getting the physical addr.
            – PBang
            Nov 22 at 6:45


















          • Isnt dma_alloc_coherent () give bus addr and virtual addr?Our custom application requires the physical addr.
            – PBang
            Nov 20 at 8:09










          • How would your application use the physical address? The only way you can use a physical address is to create a virtual address that maps to it. That is already done for you (for kernel space) by dma_alloc_coherent and can be done for user-space with dma_mmap_coherent.
            – Gil Hamilton
            Nov 20 at 16:21










          • Physical address is used by remap_pfn_range () for mapping. With SMMU disabled, there was no issue in getting the physical address with virt_to_phys (). But now that SMMU is ON, an alternative to calculate the physical address must be found. You mean to say that dma_alloc_coherent and dma_mmap_coherent are the same but the scope is different? But how to map the virtual address returned by dma_mmap_coherent to get the physical address?
            – PBang
            Nov 21 at 5:19










          • After calling dma_alloc_coherent, you should be able to pass its return values to dma_mmap_coherent in order to map the memory to user space. The latter function ultimately calls remap_pfn_range for you. See include/linux/dma-mapping.h for the source and follow code from there. (Helps to learn to use tags: run make tags in kernel source hierarchy, then use your editor to chase tags.)
            – Gil Hamilton
            Nov 21 at 18:56












          • The vendor-provided driver and application flow is roughly: 1. dma_alloc_coherent, 2. virt_to_phys, 3. mmap(phys_addr, kern_addr), 4. remap_pfn_range(). I tried using dma_mmap_coherent() in place of remap_pfn_range(), but it resulted in a page-fault error. Does the vma struct hold dma_addr_t info? I wonder if the intermediate mmap(phys_addr, kern_addr) creates issues. This flow works when there is no SMMU. Also, the custom driver has multiple intermediate files which are not well organized. Is streaming DMA, where virt_to_phys() can be used to get the physical address, better than coherent DMA?
            – PBang
            Nov 22 at 6:45
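
          The mmap path described in the comments above can be sketched as follows. This is a hedged, kernel-side sketch only, not a definitive implementation: the field names (pdev, KernelAddress, BusAddress) follow the question, the surrounding struct layout and handler name are assumptions, and the code cannot run standalone outside a kernel module.

          ```c
          #include <linux/fs.h>
          #include <linux/mm.h>
          #include <linux/dma-mapping.h>

          /* Sketch: map a buffer obtained from dma_alloc_coherent() into user
           * space via the driver's mmap file operation, using dma_mmap_coherent()
           * instead of virt_to_phys() + remap_pfn_range(). Struct names here
           * are hypothetical; error handling is minimal. */
          static int my_mmap(struct file *filp, struct vm_area_struct *vma)
          {
              struct DeviceExtension *DevExt = filp->private_data; /* assumed layout */
              size_t size = vma->vm_end - vma->vm_start;

              /* dma_mmap_coherent() takes the device, the vma, the kernel virtual
               * address and the dma_addr_t that dma_alloc_coherent() returned,
               * and performs the page-table setup internally, so the driver never
               * needs virt_to_phys() -- which is exactly what stops working once
               * the SMMU remaps bus addresses. */
              return dma_mmap_coherent(&DevExt->pdev->dev, vma,
                                       (void *)DevExt->res.KernelAddress,
                                       DevExt->res.BusAddress, size);
          }
          ```

          With this approach the intermediate virt_to_phys()/mmap(phys_addr) steps in the vendor flow are dropped entirely; user space simply calls mmap() on the device file and the handler above does the mapping.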