Physical and Non-Physical Receipts

Dear All,
Because of SOX requirements, my client would like a clear demarcation between physical and non-physical receipts.
For example, purchasing for stock is a physical receipt: the warehouse employee can see the goods himself and is responsible for taking care of them.
A third-party delivery, on the other hand, is non-physical: the vendor ships the goods directly to the customer, so the warehouse never sees them physically and hence is not responsible for them.
To handle this, we designed our process as follows:
1. Physical: PO > GR with MIGO. By posting the GR with MIGO we consider the receipt physical in our set-up.
2. Non-Physical: PO > Inbound Delivery > VL06IG (automatic GR). By posting the GR via VL06IG we treat the receipt as non-physical.
Any deviation from the above process is considered a SOX violation.
Now, the point is that inbound delivery is a standard SAP process, which can very well be used for receiving stock items as well; we have put this artificial barrier in place ourselves.
Does SAP have a standard way of segregating physical and non-physical receipts? Is there a better way of handling this situation?
Thanks and Regards,
Prakash
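
One standard hook SAP does offer for this kind of segregation is the purchase-order item category: third-party items carry their own category, so receipts can be reported by that attribute rather than by which transaction posted them. Below is a minimal SQL sketch of such a report, assuming direct read access to the purchasing item table (EKPO) and the material document table (MSEG), and assuming that internal item category '5' marks third-party items in your system (worth verifying before relying on it):
-- Classify goods receipts (movement type 101) by the PO item category
-- instead of by the receiving transaction. Table and field names are the
-- standard ones; the '5' = third-party assumption should be checked.
SELECT m.mblnr AS material_document,
       m.ebeln AS purchase_order,
       m.ebelp AS po_item,
       CASE WHEN p.pstyp = '5' THEN 'NON-PHYSICAL (third-party)'
            ELSE 'PHYSICAL (stock receipt)'
       END     AS receipt_class
FROM   mseg m
JOIN   ekpo p
       ON p.ebeln = m.ebeln
      AND p.ebelp = m.ebelp
WHERE  m.bwart = '101';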

Hi Jürgen,
You are right that a third-party PO does not require a goods receipt. But to ensure that we have a three-way match (PO, GR, IR), we make sure that all POs, except blanket POs, always have a GR. In the case of a third-party delivery, the GR is posted based on the shipping notification we receive from the vendor.
Thanks & Regards,
Prakash

Similar Messages

  • Moscow Institute of Physics and Technology (aka Phystech) wants to be content provider for iTunes U. What do we need to do?

    We are among the top 100 universities in the world in physics, and right now we can provide complete physics courses for a bachelor's degree.

    Why is it so?
    All that we here, who again are just fellow users, can say is "because Apple does not offer iTunes U in your country".
    Who decides on including a country in the list of permitted countries?
    Apple. As for who at Apple specifically makes those decisions, or what their parameters for deciding are - again, none of us here will know.
    Regards.

  • How to Install Physical and Virtual Host

    I am getting licensing issues after I installed Essentials 2012 R2 on a physical machine and then used the same license to virtualize it. It says my physical machine needs to hold all the FSMO roles, but my virtual machine is currently hosting those roles.
    I read that Microsoft allows you to use the Essentials license for the physical and virtual server.  Is that correct? 
    Is there a specific service I need to remove on the physical machine in order for these errors to stop?  I still have the Windows Server Essentials Experience service installed. Is that what needs to be removed?
    Thanks,
    Doug

    Hi Doug,
    There is an article that provides details of licensing for Windows Server 2012 R2 Essentials. Please refer to it and check whether it helps you understand the licensing better.
    Understanding Licensing for Windows Server 2012 R2 Essentials and the Windows
    Server Essentials Experience role
    Please also refer to the following article and check whether it helps:
    Customize Deployment - Windows Server Essentials
    If there is any update, please feel free to let me know.
    Hope this helps.
    Best regards,
    Justin Gu

  • One primary and two physical standby database creation

    hi
    I want to create a Data Guard configuration with one primary and two physical standby databases.
    Please tell me how to set db_file_name_convert and log_file_name_convert,
    i.e. how do I give the database names or paths in these parameters?

    Hello;
    One option is Data Guard Cascading Standby.
    Here are my complete test notes if they help:
    http://www.visi.com/~mseberg/Data_Guard_Cascading_Standby_Setup_and_Test.pdf
    Best Regards
    mseberg
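    For the convert parameters themselves, the values are ordered pairs of 'primary path','standby path' strings set on each standby. A rough sketch with hypothetical directory names (both parameters are static, so they only take effect after a restart):
    -- On the first standby (paths are placeholders):
    ALTER SYSTEM SET db_file_name_convert  = '/u01/oradata/PRIM/', '/u01/oradata/STBY1/' SCOPE = SPFILE;
    ALTER SYSTEM SET log_file_name_convert = '/u01/oradata/PRIM/', '/u01/oradata/STBY1/' SCOPE = SPFILE;
    -- The second standby gets its own pair pointing at its own directories (e.g. /u01/oradata/STBY2/).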

  • I am using my Description as the caption on Slideshow.  I ticked the "Show Title Slide" button and iPhoto physically copied the description on the first slide onto the end of every other description.  I can find no way of removing them other than by hand.

    I am using my Description as the caption in a slideshow. I ticked the "Show Title Slide" button and iPhoto physically copied the description on the first slide onto the end of every other description. I can find no way of removing them other than by hand. Unticking "Show Title Slide" did not reverse the situation back to my required state. Any ideas why it might have happened or how it might be resolved? Regards, Marshfrog1

    Attached is Dennis Linam’s Audition – “Log File” and “Log – Last File”
    Contact information Dennis [email protected]
    Previous contact information with your organization (DURIM):
    Dennis - I just finished my Audition trial and bought the subscription to the 2014 version.
    created by durin in Audition CS5.5, CS6 & CC - View the full discussion 
    DURIM - Okay.  I would expect the "Cache Warning" message because your default directories would not be the same as the ones in the settings file I generated.
    If you go back to the "7.0" directory and open the "Logs" folder, can you copy the "Audition Log.txt" file and send it as an attachment to [email protected]?  We'll take a look in that logfile and see if it gives us more information about why this is failing now.
    Also, do you have any other Adobe applications installed on this machine, such as Premiere Pro?  If so, do they launch as expected or fail as well?
    I do have the trial Pro version of Adobe Reader, but I have not activated it, because I fear the same thing will happen to it. I cannot afford to activate the subscription for that product and take the chance of it not working either. I depend on those two programs religiously. Here are the files that you requested. I appreciate any help you can give me to get this Audition program started.
    Audition Log- file
    Ticks = 16       C:\Program Files (x86)\Common Files\Adobe\dynamiclink\7.0\dynamiclinkmanager.exe
    Sent from Windows Mail

  • I am trying to stop encryption in FileVault, but I keep getting the message, "The target disk isn't eligible for reversion because it wasn't created by conversion or it is not part of a simple setup of exactly one logical and one physical volume."

    I am trying to stop encryption in FileVault on Mac OS X Lion 10.7.4,
    but I keep getting the message:
    "The target disk isn't eligible for reversion because it wasn't created by conversion or it is not part of a simple setup of exactly one logical and one physical volume."
    Please can someone advise how to disable it?
    Thank you.

    Are you using Boot Camp? See this discussion at MacRumors.
    Clinton

  • Why logical column in terms of other logical columns and not physical source

    Hello
    Can someone shed some light on the scenarios in which defining a logical column in terms of other logical columns is beneficial compared to defining it in terms of physical sources?
    I found something on Google that said defining in terms of logical columns is a one-time thing. I don't understand; even if you define it in terms of a physical source, that too is done only once.
    In both cases we build an expression specifying which logical or physical columns to use.
    Thank you

    Well, with logical columns mapped to a physical source in a logical fact table you can set aggregation rules; logical columns based on other logical columns cannot have their own rules - they inherit the aggregation rules. So, for instance, suppose you create col1 and col2, both based on physical columns and both with a SUM aggregation rule. If you create a logical column col3 based on these two as "col1"/"col2", it will produce sum(col1)/sum(col2). If what you really wanted was sum(col1/col2), then you should have defined col3 against the physical source as col1/col2 with an aggregation rule of SUM.
    Also of note: all logical columns based on physical sources are evaluated in the innermost query. So for my example above, to get col3 as logical col1 / logical col2, i.e. sum(col1)/sum(col2), the physical query will be:
    select D1.c1/D1.c2
    from (
    select sum(col1) as c1,
    sum(col2) as c2
    from tableA
    ) D1
    So my short summary is: decide what SQL you want generated, and that will lead you to your answer of whether to base a logical column on physical sources or on other logical columns. And ultimately, check the SQL generated from your work to ensure the queries are what you want and expect.
    I hope this helps!
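    For contrast, a sketch of the physical SQL you would expect in the other case, where col3 is defined against the physical source as col1/col2 with an aggregation rule of SUM (same placeholder table and column names as above):
    select sum(col1 / col2) as c1
    from tableA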

  • Mapping of Web App context root and the physical directory of the web app

    I'm running Weblogic 7.0 on Windows 2000. The physical directory of my web application
    is D:\WL8\weblogic81\TestDeploy\build\TestWebApp and under this directory I have
    my JSPs, static HTML and WEB-INF. I define the context path of this web app in
    the weblogic.xml:
    <weblogic-web-app>
         <context-root>/testapp</context-root>
    </weblogic-web-app>
    As a result of deploying this web app in the server (or it may be created manually
    also), the following entry gets inserted in the server's config.xml:
    <Application Deployed="true" Name="TestWebApp"
    Path="D:\WL8\weblogic81\TestDeploy\build" TwoPhase="true">
    <WebAppComponent Name="TestWebApp" Targets="myserver" URI="TestWebApp"/>
    </Application>
    Now, whenever I make a request of the form "http://localhost:7001/testapp/..",
    it properly executes my web app. My question is, how does the container know
    that for any request for the web app with context path 'testapp', it has to
    serve files from D:\WL8\weblogic81\TestDeploy\build\TestWebApp? In the above
    process, no such mapping is specified anywhere. I expected something like Tomcat's
    server.xml, where in docbase we clearly specify this mapping between the context
    path and the physical directory. Please help.

    Let me give some more details and hopefully this will make things clearer.
    Say you deploy /foo/bar/myweb.war and in myweb.war you configure a
    context-root of /rob
    During deployment, the server creates an ApplicationMBean with a path of
    /foo/bar/. It then creates a WebAppComponent with a uri of myweb.war.
    Next, deployment calls back on the web container and tells it to deploy
    the WebAppComponent. The web container reads the myweb.war, parses
    descriptors etc. The web container then updates its data structures to
    register that myweb.war has a context path of /rob. (It has to figure
    out all the other servlet mappings as well.)
    When a request for /rob/foo comes in, the web container consults its
    data structures to determine which webapp and servlet receives the
    request. This is not a linear search of all webapps and servlets.
    There are much better ways to do pattern matching.
    Hope this clears things up. Let me know if you still have questions.
    -- Rob
    Arindam Chandra wrote:
    Thanks for the answer. Still one thing is not clear. Whatever context path I declare
    for my web app as the value of <context-root> element in the weblogic.xml (in
    my example it's "/testapp"), it is nowhere mapped to the "URI" attribute (or
    any other attribute, sub-element whatsoever in the <Application> element).
    <Application Deployed="true" Name="TestWebApp"
    Path="D:\WL8\weblogic81\TestDeploy\build" TwoPhase="true">
    <WebAppComponent Name="TestWebApp" Targets="myserver" URI="TestWebApp"/>
    </Application>
    So when a request of the form http://myweblogic.com:7001/testapp/... arrives at
    the server, how does the server know that it has to serve this request with files
    from D:\WL8\weblogic81\TestDeploy\build\TestWebApp? It should not be that the
    web container iterates through all the web application entries in config.xml and
    tries to match against one context-root declaration. I repeat, I expected some mapping
    similar to Tomcat's server.xml, where in the <docbase> element you clearly specify
    the mapping between the context path and the physical directory.
    Rob Woollen <[email protected]> wrote:
    Arindam Chandra wrote:
    I'm running Weblogic 7.0 on Windows 2000. The physical directory of my web application
    is D:\WL8\weblogic81\TestDeploy\build\TestWebApp and under this directory I have
    my JSPs, static HTML and WEB-INF. I define the context path of this web app in
    the weblogic.xml:
    <weblogic-web-app>
         <context-root>/testapp</context-root>
    </weblogic-web-app>
    As a result of deploying this web app in the server (or it may be created manually
    also), the following entry gets inserted in the server's config.xml:
    So the server will look for your web application at the Application Path
    (D:\WL8\weblogic81\TestDeploy\build) + the web uri (TestWebApp). So it
    maps the context-root you've specified, /testapp, to that path.
    It's a little clearer in the case where you had a full-fledged EAR.
    Then your application path would map to the "root" of the EAR, and the
    uris would point to the various modules (e.g. webapps).
    -- Rob
    Now, whenever I make a request of the form "http://localhost:7001/testapp/..",
    it's properly executing my web app. My question is, how does the container know
    that for any request for the web app with context path as 'testapp', it has to
    serve files from D:\WL8\weblogic81\TestDeploy\build\TestWebApp. In the above
    process, nowhere is such a mapping specified. I expected something like Tomcat's
    server.xml, where in docbase we clearly specify this mapping between the context
    path and the physical directory. Please help.

  • Separation of the physical and logical structures

    Hi,
    I am very new to Oracle database administration. While reading Sam Alapati's book "Expert Oracle9i Database Administration," I came across the concept of the separation of an Oracle database’s physical storage structures from its logical storage structures. In particular, Sam states the following in his book:
    “This logical defining of Oracle's database structure has another fundamental motive behind it. By organizing space into logical structures and assigning these logical entities to users of the database, Oracle databases achieve the logical separation of users (owners of the database objects, such as tables) of the database from the physical manifestations of the database in terms of data files and so forth.”
    I am not quite convinced about the value this separation of the physical and the logical really adds to the task of database administration. Considering the way dBASE worked, i.e. each table used to be stored as a separate file, what would be lost if Oracle's implementation were similar and each table (i.e. file) were assigned to a particular user? I am not sure of the value added by storing the data from more than one table in more than one file, effectively resulting in a many-to-many relationship between tables and files. Please enlighten me. I would really appreciate it.
    Karim

    "...and each table were to be assigned to a particular user" - Don't know what you mean. In Oracle, every table has one and only one owner.
    "I am not sure of the value added by storing the data from more than one table in more than one file" - If an application has a thousand tables, would you rather manage 1000 files or 1?
    In general, separating the physical from the logical allows the physical structure to change without affecting the logical (in theory at least). Even a table is a logical structure. We think of rows and columns, but it isn't stored the way we think of it. When we do a select statement, we don't have to write code to read each block, extract the contents, etc.
    With partitioned tables, it is sometimes a good idea to split up partitions in such a way as to get a performance gain, like placing the most recent (and most queried) month of data on the fastest storage device. If you stuffed everything into one gigantic file, you would lose that ability.
    If you want to store each table as a separate file, you can do that with Oracle. For each new table, create a new tablespace, and then create a new file for the tablespace. Then come back to this forum in a year and tell us how it's going.
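    To make that last point concrete, here is a sketch of the dBASE-style one-file-per-table setup described above (tablespace, data file and table names are made up):
    CREATE TABLESPACE orders_ts
      DATAFILE '/u01/oradata/ORCL/orders_ts01.dbf' SIZE 100M;
    CREATE TABLE orders (
      order_id   NUMBER PRIMARY KEY,
      order_date DATE
    ) TABLESPACE orders_ts;
    -- ...and the same again for every other table, which is exactly the
    -- administrative burden the logical/physical separation avoids.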

  • How to collect physical and logical disk counters using query?

    Hi friends, I want to view physical and logical disk counters in SQL Server, like Avg. Disk sec/Read, Avg. Disk Bytes/Read, Avg. Disk sec/Write, Avg. Disk Bytes/Write, etc. Can anyone tell me how to view these by using a query?
    Thanks in advance.

    Hello,
    sys.dm_os_performance_counters only shows counters related to SQL Server, not the physical disk. If you run the query below in SQL Server, it will not return any rows, so no disk counter is present there; you will have to view them using Perfmon.
    select * from sys.dm_os_performance_counters where counter_name like '%disk%'
    This can also be done through PowerShell, but I don't have experience with that. You can search the net for a PowerShell query to read the Windows Perfmon counters.
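    That said, an approximate per-file read/write latency can be pulled from inside SQL Server with the I/O statistics DMV sketched below; the figures are cumulative since the last restart, so they are not identical to the live Perfmon counters, but they are often a useful sanity check:
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
         ON mf.database_id = vfs.database_id
        AND mf.file_id = vfs.file_id;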

  • OBIEE -10.1.3.4.1 - high physical and logical query response

    Hi All,
    I am facing a performance issue in OBIEE 10g. My report takes 2 minutes to come up, but when I fire the physical query in the database directly, the data comes back in 2 seconds.
    Below are the details from the log file. Here I observed that the response time for the physical and logical query is 109 seconds (~2 minutes). Please provide me with helpful pointers.
    +++Administrator:370000:370015:----2013/01/22 07:28:04
    -------------------- Execution Node: <<2650466>>, Close Row Count = 3332, Row Width = 26000 bytes
    +++Administrator:370000:370015:----2013/01/22 07:28:04
    -------------------- Execution Node: <<2650466>> DbGateway Exchange, Close Row Count = 3332, Row Width = 26000 bytes
    +++Administrator:370000:370015:----2013/01/22 07:28:04
    -------------------- Execution Node: <<2650466>> DbGateway Exchange, Close Row Count = 3332, Row Width = 26000 bytes
    +++Administrator:370000:370015:----2013/01/22 07:28:04
    -------------------- Query Status: Successful Completion
    +++Administrator:370000:370015:----2013/01/22 07:28:05
    -------------------- Rows 3332, bytes 86632000 retrieved from database query id: <<2650466>>
    +++Administrator:370000:370015:----2013/01/22 07:28:05
    -------------------- Physical query response time 109 (seconds), id <<2650466>>
    +++Administrator:370000:370015:----2013/01/22 07:28:05
    -------------------- Physical Query Summary Stats: Number of physical queries 1, Cumulative time 109, DB-connect time 0 (seconds)
    +++Administrator:370000:370015:----2013/01/22 07:28:05
    -------------------- Rows returned to Client 3332
    +++Administrator:370000:370015:----2013/01/22 07:28:05
    -------------------- Logical Query Summary Stats: Elapsed time 109, Response time 109, Compilation time 0 (seconds)

    Did you run the SQL from a client on the OBIEE server or from your local machine? Does the physical SQL run against the DB in 2 seconds when executed directly on the OBIEE server, but take 109 seconds when sent by the OBIEE server? Is that correct?

  • Export physical and logical details on ASA 5520 and 8.0 software

    Hello... does anybody know if there is any way to export details of the physical and logical interfaces (including interface descriptions) to Excel, PDF or any other format from the command line or ASDM?
    Thanks,
    John

    Export directly in xls, xlsx or pdf - no.
    The output of "show run interface" or "show interface" is pretty structured however and easily parsed by Excel - either manually or via a macro. See output below (you can omit the interface identifier to get all interfaces. I used one for brevity.)
    One can build a script to log in, perform an arbitrary command logging the output to a file which can then be massaged to extract the information you want in a suitable format (csv, etc.). Once in Excel it can be saved as pdf if you're so inclined.
    Of course, some of the full-featured network management tools do a lot of this (and lots more) if you have them.
    ASA-1# sh run int eth0/0
    interface Ethernet0/0
    nameif outside
    security-level 0
    ip address x.x.x.x 255.255.255.224
    ASA-1#
    ASA-1# sh int eth0/0
    Interface Ethernet0/0 "outside", is up, line protocol is up
      Hardware is i82546GB rev03, BW 1000 Mbps, DLY 10 usec
    Auto-Duplex(Full-duplex), Auto-Speed(1000 Mbps)
    Input flow control is unsupported, output flow control is unsupported
    MAC address 0013.c480.6b50, MTU 1500
    IP address x.x.x.x, subnet mask 255.255.255.224
    14156274 packets input, 16095096189 bytes, 0 no buffer
    Received 44764 broadcasts, 0 runts, 0 giants
    0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
    0 pause input, 0 resume input
    0 L2 decode drops
    8548524 packets output, 1006461151 bytes, 0 underruns
    0 pause output, 0 resume output
    0 output errors, 64 collisions, 6 interface resets
    95 late collisions, 627 deferred
    0 input reset drops, 0 output reset drops, 0 tx hangs
    input queue (blocks free curr/low): hardware (255/230)
    output queue (blocks free curr/low): hardware (255/125)
      Traffic Statistics for "outside":
    14156267 packets input, 15839536990 bytes
    8548619 packets output, 820243613 bytes
    39502 packets dropped
          1 minute input rate 2 pkts/sec,  349 bytes/sec
          1 minute output rate 2 pkts/sec,  425 bytes/sec
          1 minute drop rate, 0 pkts/sec
          5 minute input rate 2 pkts/sec,  2091 bytes/sec
          5 minute output rate 1 pkts/sec,  352 bytes/sec
          5 minute drop rate, 0 pkts/sec

  • Changing ip of physical and logical host

    I have a 2-node Sun Cluster 2.2 setup working. I have to put it in another LAN, so I have to change the physical and logical host, terminal concentrator and console IP addresses. How can I do this?
    Thanks

    I don't see a reply so I'll take a shot. Just like you have 0-9 for numbers and you can arrange them any way you want, you still only have 0-9. You have the physical (system blocks and datafiles), which is somewhat static. Then you have the logical (tablespaces, segments, extents), which is volatile and connected by chaining (links), which may be all over the place in the physical but appears to be one whole unit in the logical. Just like an image on a TV screen wearing a red hat: you see a red hat (logical), but it is actually pixels which are chained or linked (data pointers) to each other by the color red. I hope that helps until someone comes by with a better answer.

  • How to create a logical and a physical path?

    Hi ,
    I want to know how a logical path is created.
    Also, how can a physical path be created from a provided logical path?
    Regards,
    Harshit Rungta

    Hi,
    Use FM 'FILE_GET_NAME' to assign the physical file name using a logical file name.
    Remember you need to create the 'logical file name' using transaction FILE.
    R/3 applications run on various platforms with various file systems. Hence we use platform-independent logical file names in our application programs.
    Function module 'FILE_GET_NAME' converts a logical file name to the corresponding physical file name and path for the hardware platform concerned.
    For this conversion to work for different platforms, the definition of a logical file name must include a logical file path, which in turn is converted to different physical file paths, depending on the particular platform. The platform-specific file name returned by the function module is composed of the physical file path for the current platform and the physical file name associated with the logical file name. Placeholders in physical file and path names are substituted at runtime by the corresponding current values.
    Example
    logical file name:     MONTHLY_SALES_FILE
    physical file name:     VALUES<PARAM_1>
    logical path:     SALES_DATA_PATH
    physical path (UNIX):     /usr/<SYSID>/<FILENAME>
    physical path (Windows):     C:\SALES\<FILENAME>.
    Kind Regards,
    Nikhil J.

  • Difference between physical and logical standby database

    What is the difference between a physical and a logical standby database?

    Hi,
    A physical standby is a read-only DB; the redo logs are applied as they are.
    A logical standby can be a read/write DB, and the logs are applied in the form of SQL statements.
    Thanks & Regards,
    Pavan Kumar N
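    A quick way to check which kind of standby a given database is, using the standard data dictionary view (DATABASE_ROLE returns PRIMARY, PHYSICAL STANDBY or LOGICAL STANDBY; OPEN_MODE shows whether it is mounted, read only, or read write):
    SELECT database_role, open_mode FROM v$database;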
