Data Warehouse - Capturing a Period

I've been a bit stuck; maybe I am overthinking this scenario. In creating a data warehouse fact I am used to the idea of definitive dates (example: I sold x of product h today); in that case you have a definitive date for each occurrence and can relate it to a time dimension. My current scenario:
Customers have services; they may have multiple features on a service and change features on the service. In case of a change, the old feature is ended and the new one started, so I guess I can key on this change. The one issue I keep circling back to is how to handle inactive services for historical values, and how this would all come together. I just want to make sure I store it properly and the foundation is expandable. Any feedback would be greatly appreciated.
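
One common way to model this kind of period fact is an effective-dated row per feature per service, closed out when the feature changes; a minimal sketch, with every table and column name hypothetical:

CREATE TABLE FactServiceFeature (
    ServiceKey   INT NOT NULL,  -- FK to the service dimension
    FeatureKey   INT NOT NULL,  -- FK to the feature dimension
    StartDateKey INT NOT NULL,  -- date dimension key: feature added
    EndDateKey   INT NULL,      -- date dimension key: feature ended (NULL = still active)
    CONSTRAINT PK_FactServiceFeature PRIMARY KEY (ServiceKey, FeatureKey, StartDateKey)
);

A feature change closes the old row (sets EndDateKey) and inserts a new one; an inactive service simply keeps its closed-out rows, so history survives. "Features in effect on date D" is then StartDateKey <= D AND (EndDateKey IS NULL OR EndDateKey > D).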

No response; found resolution

Similar Messages

  • Analyze command in a Data warehouse env

    We are doing daily data loads on our data warehouse. On certain target tables, we have change data capture enabled. As part of loading a table (4 million rows total), we remove data for a certain time period (say a month, 50,000+ rows) and load it again from the source. We are also doing a full table analyze as part of this load, and it is taking a long time.
    Question is: do we need to run the analyze command every day? Would we see a big difference if we ran the analyze once a week instead?
    Thanks.

    Hi srwijese,
    My DW is actually 12 TB, and after each data load we collect stats on our tables. BUT we have partitioned tables in most cases, so we just collect at the partition level using the dbms_stats package. I don't know whether your environment is partitioned; if it is, collect stats just for the partition you loaded.
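    A minimal sketch of that partition-level collection, with schema, table, and partition names all hypothetical:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname     => 'DW',          -- hypothetical schema
        tabname     => 'SALES_FACT',  -- hypothetical target table
        partname    => 'P_MONTH_01',  -- the partition just loaded
        granularity => 'PARTITION',   -- skip global/table-level stats
        cascade     => TRUE);         -- include the partition's local indexes
    END;
    /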
    P.S: If you wish add [email protected] (MSN) to share experiences.
    Jonathan Ferreira
    http://oracle4dbas.blogspot.com

  • Configuration Dataset = 90% of Data Warehouse - Event Errors 31552

    Hi All,
    I'm currently running SCOM 2012 R2 and have recently had some problems with the Data Warehouse data sync. We currently have around 800 servers in our production environment and no network devices; we use Orchestrator for integration with our call logging system, and I believe this is where our problems started. We had a runbook which got itself into a loop and was constantly updating alerts; it also contributed to a large number of state changes. We have resolved that problem now, but I then started to receive alerts saying SCOM couldn't sync alert data, under event 31552.
    Failed to store data in the Data Warehouse.
    Exception 'SqlException': Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding. 
    One or more workflows were affected by this.  
    Workflow name: Microsoft.SystemCenter.DataWarehouse.StandardDataSetMaintenance 
    Instance name: Alert data set 
    Instance ID: XX
    Management group: XX
    I have been researching problems with syncing alert data, and came across the queries to do the database maintenance manually. I ran that on the Alert instance and it took around 16.5 hours on the first night; then it ran fast (2 seconds) most of the day, but when it got to about the same time the next day it took another 9.5 hours, so I'm not sure why it's giving different results.
    Initially it appeared all of our datasets were out of sync; after the first night all appear to be in sync bar the Hourly Performance dataset, which still has around 161 OutstandingAggregations. When I run the maintenance on Performance it doesn't appear to fix it. (It runs in about 2 seconds, successfully.)
    I recently ran DWDatarp on the database to see how the Alert dataset was looking, and to my surprise I found that the Configuration dataset has blown out to take up 90% of the data warehouse; table below. Does anyone have any ideas on what might cause this or how I can fix it?
    Dataset name                                                                  Aggregation name      Max Age    Current Size, Kb
    Alert data set                                                                Raw data                  400        132,224 (  0%)
    Client Monitoring data set                                                    Raw data                   30              0 (  0%)
    Client Monitoring data set                                                    Daily aggregations        400             16 (  0%)
    Configuration dataset                                                         Raw data                  400    683,981,456 ( 90%)
    Event data set                                                                Raw data                  100     17,971,872 (  2%)
    Performance data set                                                          Raw data                   10      4,937,536 (  1%)
    Performance data set                                                          Hourly aggregations       400     28,487,376 (  4%)
    Performance data set                                                          Daily aggregations        400      1,302,368 (  0%)
    State data set                                                                Raw data                  180        296,392 (  0%)
    State data set                                                                Hourly aggregations       400     17,752,280 (  2%)
    State data set                                                                Daily aggregations        400      1,094,240 (  0%)
    Microsoft.Exchange.2010.Dataset.AlertImpact                                   Raw data                    7              0 (  0%)
    Microsoft.Exchange.2010.Dataset.AlertImpact                                   Hourly aggregations         3              0 (  0%)
    Microsoft.Exchange.2010.Dataset.AlertImpact                                   Daily aggregations        182              0 (  0%)
    Microsoft.Exchange.2010.Reports.Dataset.Availability                          Raw data                  400            176 (  0%)
    Microsoft.Exchange.2010.Reports.Dataset.Availability                          Daily aggregations        400              0 (  0%)
    Microsoft.Exchange.2010.Reports.Dataset.TenantMapping                         Raw data                    7              0 (  0%)
    Microsoft.Exchange.2010.Reports.Dataset.TenantMapping                         Daily aggregations        400              0 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data   Raw data                    3         84,864 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data   Hourly aggregations         7        407,416 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ActiveUserMailflowStatistics.Data   Daily aggregations        182        143,128 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data       Raw data                    7          6,088 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data       Hourly aggregations        31         20,056 (  0%)
    Microsoft.Exchange.2010.Reports.Transport.ServerMailflowStatistics.Data       Daily aggregations        182          3,720 (  0%)
    I have one other 31553 event showing up on one of the Management servers as follows,
    Data was written to the Data Warehouse staging area but processing failed on one of the subsequent operations.
    Exception 'SqlException': Sql execution failed. Error 2627, Level 14, State 1, Procedure ManagedEntityChange, Line 368, Message: Violation of UNIQUE KEY constraint 'UN_ManagedEntityProperty_ManagedEntityRowIdFromDAteTime'. Cannot insert duplicate key in
    object 'dbo.ManagedEntityProperty'. The duplicate key value is (263, Aug 26 2013  6:02AM). 
    One or more workflows were affected by this.  
    Workflow name: Microsoft.SystemCenter.DataWarehouse.Synchronization.ManagedEntity 
    Instance name: XX 
    Instance ID: XX
    Management group: XX
    which from my readings means I'm likely in for an MS support call... :( But I just wanted to see if anyone has any information about the Configuration dataset, as I couldn't find much in my searching.

    Hi All,
    The results of the MS support call were as follows. I don't recommend doing these steps without an MS support case; any damage you do is your own fault. These particular actions resolved our problems:
    1. Regarding the Configuration dataset being so large.
    This was caused by our AlertStage table, which was also very large. As I didn't require any of the alerts sitting in the AlertStage table, we simply did a straight truncation of the table and ran the maintenance tasks manually to clear this up. The document linked by MHG above shows the process of doing a backup & restore on the AlertStage table if you need to. It took a few days of running maintenance tasks to resolve this problem properly. As soon as the truncation had taken place, the Configuration dataset dropped in size to less than a gig.
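    For reference, the manual cleanup referred to above usually looks roughly like this (run against the OperationsManagerDW database; treat it as a sketch, since table and dataset names can vary by version, and as stated, involve MS support first):
    -- Discard staged alerts (only if, as here, they are not needed)
    TRUNCATE TABLE Alert.AlertStage;
    -- Then run standard maintenance for the Alert dataset manually
    DECLARE @DatasetId uniqueidentifier;
    SELECT @DatasetId = DatasetId
    FROM StandardDataset
    WHERE SchemaName = 'Alert';
    EXEC StandardDatasetMaintenance @DatasetId;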
    2. Error 31553 Duplicate Key Error
    This was a problem with duplicate keys in the ManagedEntityProperty table. We identified rows which had duplicate information, which could be gathered from the Events being logged on the Management Server.
    We then updated a few of these rows to have a slightly different time from what was already in the database. We noticed that the event kept logging with a different row each time we updated the previous row. We ran the following query to find out how many rows
    actually had duplicates:
    select * from ManagedEntityProperty mep
    inner join ManagedEntity me on mep.ManagedEntityRowId = me.ManagedEntityRowId
    inner join ManagedEntityStage mes on mes.ManagedEntityGuid = me.ManagedEntityGuid
    where mes.ChangeDateTime = mep.FromDateTime
    order by mep.ManagedEntityRowId
    This returned over 25,000 duplicate rows. Rather than replace the times for all the rows, we removed all duplicates from the database. (Best to have MS check this one out for you if you have a lot of data.)
    After doing this there was a lot of data moving around the staging tables (I assume from the management server that couldn't communicate properly), so once again we truncated the AlertStage table as it wasn't keeping up. Once this was done everything worked properly and all the queues stayed under control.
    To confirm things had been cleared up we checked the AlertStage table had no entries and the ManagedEntityStage table had no entries. We also confirmed that the 31553 events stopped on the Management server.
    Hopefully this can help someone, or provide a bit more information on these problems.

  • Update data automatically in fact table in Data Warehouse

    Hi,
    I'm working on the creation of a data warehouse that includes different data sources: SQL Server performance (more than one server), Active Directory users, server performance (more than one), and Exchange server mailboxes. The problem is that performance data changes frequently (like CPU and memory), so my question is how to update data in the fact table every 5 seconds automatically with SSIS.
    Thank you for any advice  

    I'm assuming you have already figured out how to capture the data (e.g. PowerShell, Extended Events, MDW, etc.) and just need to know what dimensions or fact tables you need.
    You need to decide how often you are going to capture this data, and based on that you will have dimensions with the appropriate grain. Don't try to cram everything into the same fact table if it is not of the same granularity. Also, separate processes usually have separate fact tables.
    In addition to the Date dimension, you will need a Time dimension with a grain of 1 second (or maybe 5 seconds, if that is when you get your data), then run the SSIS package every 5 seconds to capture and append that data to the fact table.
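    A minimal sketch of such a Time dimension at a 5-second grain, in T-SQL (all names hypothetical):
    CREATE TABLE DimTime (
        TimeKey  INT     NOT NULL PRIMARY KEY,  -- seconds since midnight
        [Hour]   TINYINT NOT NULL,
        [Minute] TINYINT NOT NULL,
        [Second] TINYINT NOT NULL
    );
    -- One row per 5-second interval: 86,400 / 5 = 17,280 rows per day
    ;WITH n AS (
        SELECT TOP (17280)
               (ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1) * 5 AS s
        FROM sys.all_objects a CROSS JOIN sys.all_objects b
    )
    INSERT INTO DimTime (TimeKey, [Hour], [Minute], [Second])
    SELECT s, s / 3600, (s % 3600) / 60, s % 60 FROM n;
    The fact rows then carry a DateKey plus a TimeKey, and the 5-second SSIS run only appends new measurements.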
    - Aalamjeet Rangi | (Blog)

  • Accessing Data Warehouse with HTML DB

    I have a test data warehouse database (10g) comprising seven dimension tables and one fact table. When I access one table at a time, the query runs fine, but when I join two or more dimension tables to the fact table, the result set comes out wrong. The performance is also very poor. Is HTML DB not capable of properly accessing data warehouse data?
    Here is the query I'm having problem with:
    SELECT p.prod_name, s.store_name, pr.week, sl.dollars
    FROM sales sl, product p, period pr, store s
    WHERE p.prodkey = sl.prodkey
    AND pr.perkey = sl.perkey
    AND p.prod_name LIKE 'Assam Gold%'
    OR p.prod_name LIKE 'Earl%'
    AND s.store_name LIKE 'Instant%'
    AND pr.month = 'NOV'
    AND pr.year = 2003
    ORDER BY p.prod_name, sl.dollars DESC
    Your input would be appreciated.

    I doubt this was intentional, but you are not joining the store table to anything. You do filter the rows from that table with the AND s.store_name LIKE 'Instant%' predicate, but it is not joined to any of the other 3 tables. Your query will essentially return the number of rows from the other 3 tables multiplied by the number of rows returned from store. You might also think about grouping some of your predicates, for readability and possibly for correct logic:
    SELECT p.prod_name, s.store_name, pr.week, sl.dollars
      FROM sales sl, product p, period pr, store s
     WHERE p.prodkey = sl.prodkey
       AND pr.perkey = sl.perkey
       -- Add missing predicate here
       -- AND s.something = sl,p, or pr .something
       -- end missing predicate
       AND (p.prod_name LIKE 'Assam Gold%'
            OR
            p.prod_name LIKE 'Earl%')
       AND s.store_name LIKE 'Instant%'
       AND pr.month = 'NOV'
       AND pr.year = 2003
     ORDER BY p.prod_name, sl.dollars DESC
    Hope this helps,
    Tyler
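    As a side note, the logic problem in the original WHERE clause comes from operator precedence: in SQL, AND binds tighter than OR, so
    -- Without parentheses, this:
    --   WHERE a OR b AND c
    -- is evaluated as:
    --   WHERE a OR (b AND c)
    which means the store, month, and year filters applied only to the 'Earl%' branch, while 'Assam Gold%' rows escaped them entirely; the parentheses in the rewritten query restore the intended logic.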

  • CSR Historical Data not capturing data?

    Hi All,
    Last week (on Thursday) our historical data stopped capturing any details.
    I have captured a screenshot of the report. You can see that the agent logged-in time is 111:47:23 (at 3:47:23 pm EST). Any help fixing the issue would be greatly appreciated!
    Thanks in advance!
    Craig

    Hi Craig,
    Subscriber Goes Down
    When the subscriber goes down for more than the 2- or 4-day retention period,
    reinitialize the subscriber in CRS Administration (Datastore Control Center web
    page) and reinitialize the subscription for all the datastores.
    http://www.cisco.com/en/US/docs/voice_ip_comm/cust_contact/contact_center/crs/express_5_0/maintenance/admin/crs501ag.pdf
    Access the Datastore Control Center by selecting System > Datastore Control
    Center from the CRS Administration menu bar.
    Reinit Subscriber—Click this button to reinitialize the subscriber with a
    copy of data from the Publisher. (This causes the data on the subscriber to be
    overwritten by the data from the Publisher.)
    Note Only use this button if you have determined that the Subscriber needs this
    data from the Publisher (if the Subscriber and the Publisher are not
    synchronized).
    Hope this helps.
    Anand
    Please rate all helpful posts by clicking on the stars below the helpful posts !!

  • Data warehouse monitor initial state data synchronization process failed to write state.

    Data Warehouse monitor initial state data synchronization process failed to write state to the Data Warehouse database. Failed to store synchronization process state information in the Data Warehouse database. The operation will be retried.
    Exception 'SqlException': Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
    One or more workflows were affected by this. 
    Workflow name: Microsoft.SystemCenter.DataWarehouse.Synchronization.MonitorInitialState
    Instance name: Data Warehouse Synchronization Service
    Instance ID: {0FFB4A13-67B7-244A-4396-B1E6F3EB96E5}
    Management group: SCOM2012R2BIZ
    Could you please help me out with this issue?

    Hi,
    It seems that you are encountering event 31552; you may check the Operations Manager event logs for more information regarding this issue.
    There can be many causes of this 31552 event, such as:
    A sudden flood (or excessive sustained amounts) of data to the warehouse that is causing aggregations to fail moving forward.
    The Exchange 2010 MP is imported into an environment with lots of state changes happening.
    Excessively large ManagedEntityProperty tables causing maintenance to fail because they cannot be parsed quickly enough in the time allotted.
    Too much data in the warehouse staging tables which was not processed due to an issue and is now too much to be processed at one time; a quick check for this is sketched below.
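    A hedged sketch for gauging that staging backlog (run against the OperationsManagerDW database; staging table names can differ between versions):
    SELECT COUNT(*) AS StagedAlerts FROM Alert.AlertStage;
    SELECT COUNT(*) AS StagedEvents FROM Event.EventStage;
    SELECT COUNT(*) AS StagedPerf   FROM Perf.PerformanceStage;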
    Please go through the links below to get more information about troubleshooting this issue:
    The 31552 event, or “why is my data warehouse server consuming so much CPU?”
    http://blogs.technet.com/b/kevinholman/archive/2010/08/30/the-31552-event-or-why-is-my-data-warehouse-server-consuming-so-much-cpu.aspx
    FIX: Failed to store data in the Data Warehouse due to a Exception ‘SqlException': Timeout expired.
    Regards,
    Yan Li
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • Streaming OLTP to a Data Warehouse

    I am currently working on a project to use Streams to capture changes from a highly normalized OLTP DB and propagate them to a separate DB that will be in the form of a flat data warehouse design. Streams seems to provide great potential to make the reporting from the warehouse near real-time and to avoid the typical nightly ETL through staging tables. Conceptually, I'd like to have a capture process on the source, an apply process with a DML handler (changes table name and owner, deletes unnecessary columns, and adds columns of queried data) that re-enqueues the LCR at the source, a propagation process that sends the user-defined LCR generated by my DML handler package, and finally an apply process at the destination site to populate the data.
    I have several components of this process working but I can't get it all to come together. The capture process and apply with the DML handler are no problem. But once the message is re-enqueued the trouble begins. It seems like the message is propagating, based on the events and bytes propagated displayed in dba_queue_schedules, but I never see the LCR hit the destination queue. Is there something specific that needs to be created since this is now a "user-defined" LCR? The apply process on the destination was created with the apply_captured parameter set to false, so I thought that this would be enough. Do I need to create a special agent to handle this, or create a subscriber for the queue? Any help would be greatly appreciated.
    Thanks,
    Bill

    Thanks for suggesting where to look, Patricia. I don't have any errors in the error queue, I do have data being propagated as indicated by the DBA_QUEUE_SCHEDULES view, there are no errors associated with the propagation, and my apply process is running on the destination side. However, I'm not seeing any messages being read from the queue on the destination side. The V$STREAMS_APPLY_READER view has a state of "DEQUEUE MESSAGES" but total_messages_dequeued = 0. I guess this makes sense, since I never see any data being populated in the strmadmin.streams_queue_table in the destination database. I assume that if the data was propagating correctly I'd see an entry here, since my source apply process uses a DML handler to re-enqueue the LCR for propagation, thus making it a non-captured event?
    Any suggestions of what to look for next?
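    Two hedged checks that may narrow this down (these are the standard Streams/AQ views; the queue name below is hypothetical):
    -- Does the destination queue have a subscriber for the re-enqueued, user-defined LCRs?
    SELECT queue_name, consumer_name
    FROM   DBA_QUEUE_SUBSCRIBERS
    WHERE  queue_name = 'STREAMS_QUEUE';
    -- Is the apply reader actually dequeuing anything?
    SELECT apply_name, state, total_messages_dequeued
    FROM   V$STREAMS_APPLY_READER;
    If the subscriber row is missing, that is a common place for user-enqueued LCRs to get stuck: an apply process created with apply_captured => FALSE still needs a matching subscription/rule on the destination queue.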

  • Efficiency of data warehouse sql and star/snowflake schema

    Hi,
    We are using 11.2.0.3 and need to improve query performance of reports in a data warehouse star/snowflake schema.
    In addition to indexing, partitioning, having star_transformation enabled, etc., I am considering the impact of the following on query performance.
    A central fact (over a billion rows) joins to a customer dimension (a few hundred thousand rows), which in turn is joined to the latest version of the dimension (which has circa 30,000 rows).
    The table with a few hundred thousand rows (the customer dimension) must always be queried, as data is stored against the version of the customer applicable at the time; we just query latest_customer because users want to see
    the latest version of customer attributes, to stop data being fragmented across several rows in the report.
    I am considering whether it would be more efficient to create a dimension which is the equivalent of customer but also stores the latest version of the customer attributes on the one row. This would mean the customer dimension would have far more columns, but queries could avoid the additional lookup of this 30k-row table.
    Thoughts are - would this be a material benefit?
    At the moment users would query latest_customer to, say, get all customers belonging to a certain multiple chain.
    If changed as above, they would query the customer dimension with a few hundred thousand rows directly.
    Thoughts?
    Thanks

    We are using 11.2.0.3 and need to improve query performance of reports in a data warehouse star/snowflake schema.
    That is NOT a realistic or even meaningful goal.
    And until you identify and document an actual PROBLEM or specific goal you should not even be considering possible solutions.
    Anything you do to improve one report might degrade the performance of several other reports.
    You need to start over and gather information about WHAT Oracle is doing for the reports now, HOW that work is being done and capture metrics that validate how the reports are currently performing.
    Your first step should be to document the performance you are getting now for each report.
    The second step would be to identify which of those reports is a possible target for tuning.
    The third step is to prioritize the reports: which is most important to tune, which is next, etc.
    Then you need to generate the execution plans for those reports to identify EXACTLY how Oracle is executing the queries now.
    At this point you should have enough information to know what your possible options are.
    So then you create a prioritized list of options. The top of the list should be additions to what you already have.
    1. New indexes - regular or bitmapped (if appropriate)
    2. Dropping indexes that aren't being used.
    3. Report-ready summary tables or materialized views.
    IMHO modifying your basic architecture should be your LAST resort and undertaken only if you can't solve your (unstated) problem using solutions that have less impact and risk.
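    On option 3, and only as an illustration of the general idea (all object names hypothetical), a report-ready materialized view that pre-joins the fact to the latest customer attributes might look like:
    CREATE MATERIALIZED VIEW mv_sales_by_latest_cust
    BUILD IMMEDIATE
    REFRESH COMPLETE ON DEMAND   -- refreshed as part of the load, not per query
    ENABLE QUERY REWRITE
    AS
    SELECT lc.customer_id,
           lc.chain_name,        -- hypothetical latest-version attribute
           f.date_key,
           SUM(f.sales_amount) AS sales_amount
    FROM   fact_sales f
    JOIN   customer c         ON c.customer_key = f.customer_key
    JOIN   latest_customer lc ON lc.customer_id = c.customer_id
    GROUP  BY lc.customer_id, lc.chain_name, f.date_key;
    With query rewrite enabled, existing reports can pick this up without being rewritten, which keeps the risk lower than restructuring the dimensions.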

  • ERP data warehouse

    Hi all,
    What is the requirement for a data warehouse if an ERP exists? As the ERP captures the major transactional data, what are the difficulties in analyzing data directly from the ERP database? What benefits can be found if data is transformed into a data warehouse data model? What are the difficulties in transforming data from the ERP to the data warehouse?
    Please help me. These are the research questions of my M.Sc thesis.
    Thanks.
    Swapan

    I'm receiving a similar error when attempting to create data warehouse tables with BIAPPS 7.9.6.
    The "Installing the DAC Platform" documentation states:
    "4.9.4.2 How to Create ODBC Connections for Oracle Databases
    Follow these instructions for creating ODBC connections for Oracle databases on
    Windows. For instructions on creating ODBC connections for Oracle databases on
    UNIX or Linux, see the documentation provided with your database.
    Note: You must use the Oracle Merant ODBC driver to create the ODBC connections.
    The Oracle Merant ODBC driver is installed by the Oracle BI Applications installer.
    Therefore, you will need to create the ODBC connections after you have run the Oracle
    BI Applications installer and have installed the DAC Client."

  • SSRS 2008 Column Chart with Calculated Series (moving average) "formula error - there are not enough data points for the period" error

    I have a simple column chart grouping on 1 value on the category axis. For simplicity's sake, we are plotting $ amounts grouping by month on the category axis. I right-click on the data series and choose "Add calculated series...". I choose moving average. I want to move the average over at least 2 periods.
    When I run the report, I get the error "Formula error - there are not enough data points for the period". The way the report is, I never have a guaranteed number of categories (there could be one or there could be 5). When there are 2 or more, the chart renders fine; however, when there is only 1 value, instead of suppressing the moving average line, I get that error and the chart shows nothing.
    I don't think this is entirely acceptable for our end users. At a minimum, I would think the moving average line would be suppressed instead of hiding the entire chart. Does anyone know of any workarounds, or do I have to enter another Microsoft Connect bug/design consideration?
    Thank you,
    Dan

    I was having the same error while trying to plot a moving average across 7 days. The workaround I found was rather simple.
    If you right-click your report in the Solution Explorer and select "View Code", it will give you the underlying XML of the report. Find the entry for the value of your calculated series and enter a formula to dynamically create your periods.
    <ChartFormulaParameter Name="Period">
                      <Value>=IIf(Count(Fields!Calls.Value) >= 7 ,7, (Count(Fields!Calls.Value)))</Value>
    </ChartFormulaParameter>
    What I'm doing here is getting the row count of records returned in the chart. If the returned rows are greater than or equal to 7 (the number of days I want to average over), it sets the period to 7. If not, it sets the period to the number of returned rows. So far this has worked great. I'm probably going to add more code to handle no records returned, although in my case that shouldn't happen - but you never know.
    A side note:
    If you open the calculated series properties in the designer, you will notice the number of periods is set to "0". If you change this it will overwrite your custom formula in the XML.

  • Best practice of metadata table in data warehouse environment ?

    Hi guru's,
    In our data warehouse we have 1. a stage schema and 2. DWH (the data warehouse reporting schema). In staging we have about 300 source tables. In the DWH schema, we create only the tables required from a reporting perspective. Some of the tables in the staging schema have been created in the DWH schema as well, with different table and column names; the naming convention for these tables and columns in the DWH schema is based more on business names.
    In order to keep track of these tables we are creating a metadata table in the DWH schema, for example:
    Stage       DWH_schema
    Table_1     Table_A
    Table_2     Table_B
    Table_3     Table_C
    Table_4     Table_D
    My question is how we handle the column names for each of these tables. The stage_1, stage_2 and stage_3 column names have been renamed in DWH_schema as part of Table_A, Table_B and Table_C.
    As said earlier, we have about 300 tables in stage and maybe around 200 tables in the DWH schema. A lot of the column names have been renamed in the DWH schema from the stage tables, and some of the tables have 200 columns.
    So my concern is: how do we handle the column names in the metadata table? Do we need to keep only table names in the metadata table, not column names?
    Any ideas will be greatly appreciated.
    Thanks!

    hi
    seems quite a buzzing question.
    In our project we designed a hub-and-spoke-like architecture.
    Thus we have 3 layers. L0 is the one closest to the source, and L0 table names are linked to the corresponding source names by means of a naming standard (like tabA, EXT_tabA, tabA_OK1 and so on, based on the implementation of the load procedures).
    At L1 we have the ODS, a normalized model; we use business names for tables there, and standard names for temporary structures and artifacts.
    Both L0 and L1 keep the source's column names as a general rule; new columns, like calculated ones, are business-driven, and metadata are standard-driven.
    Data Modeler fits the L1 modelling purpose perfectly.
    L2 is the dimensional schema; business names take over for tables and columns, eventually rewritten at the presentation layer (front-end tool).
    hope this helps D.
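    Back on the original column-name question: a common pattern is a separate column-level mapping table next to the table-level one, rather than cramming columns into it; a hypothetical sketch:
    CREATE TABLE meta_column_map (
        stage_table   VARCHAR2(30) NOT NULL,
        stage_column  VARCHAR2(30) NOT NULL,
        dwh_table     VARCHAR2(30) NOT NULL,
        dwh_column    VARCHAR2(30) NOT NULL,
        CONSTRAINT pk_meta_column_map PRIMARY KEY (stage_table, stage_column)
    );
    -- e.g. stage_1 in Table_1 surfaces under a business name in Table_A
    INSERT INTO meta_column_map VALUES ('TABLE_1', 'STAGE_1', 'TABLE_A', 'CUSTOMER_NAME');
    The table-level mapping then falls out as SELECT DISTINCT stage_table, dwh_table, so nothing is lost by tracking at column grain.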

  • Diff b/w Data warehouse and Business Warehouse

    Hi all,
    What is the difference between a Data Warehouse and a Business Warehouse?

    hi..
    The difference between data warehousing and Business Warehouse is as follows.
    Data warehousing is the concept, and BIW is a tool that applies this concept to business applications.
    Data warehousing allows you to analyze tons of data (millions and millions of records) in a convenient and optimal way; it is called BIW when applied to business applications, like analyzing the sales of a company.
    Advantages: considering the volume of business data, BIW allows you to make decisions faster - I mean, you can analyze data faster. It also supports multiple languages, is easy to use, and so on.
    Refer to this:
    Re: WHAT IS THE DIFFERENCE BETWEEN BIW & DATAWAREHOUSING
    hope it helps...

  • Performance issues with data warehouse loads

    We have performance issues with our data warehouse load ETL process. I have run analyze and dbms_stats and checked the database environment. What other things can I do to optimize performance? I cannot use Statspack since we are running Oracle 8i. Thanks
    Scott

    Hi,
    you should analyze the DB after you have loaded the tables.
    Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
    If yes:
    make sure your sequences cache values (alter sequence s cache 10000) - see the sketch below.
    Drop all unneeded indexes while loading, and disable triggers if possible.
    How big is your redo log buffer? When loading a large amount of data, it may be an option to enlarge this buffer.
    Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
    Is it possible to use a direct load? Or do you already direct load?
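    A minimal sketch of the sequence-cache and direct-load suggestions (sequence and table names hypothetical):
    -- Cache sequence values to cut recursive SQL during the load
    ALTER SEQUENCE s CACHE 10000;
    -- Direct-path insert writes above the high-water mark, bypassing the buffer cache
    INSERT /*+ APPEND */ INTO sales_fact
    SELECT * FROM stage_sales;
    COMMIT;  -- required before the session can read the table again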
    Dim

  • Please help to get onhand stock report with last purchase and billed date warehouse and item wise

    Please help me get an on-hand stock report with last purchase and last billed date, warehouse- and item-wise.

    Hi Rajeesh Ambadi...
    Try This
    SELECT DISTINCT T0.ItemCode, T1.ItemName, T0.OnHand AS 'Total Qty',
           T1.LastPurDat, T1.LastPurPrc
    FROM OITW T0
    INNER JOIN OITM T1 ON T0.ItemCode = T1.ItemCode
    INNER JOIN OITB T2 ON T1.ItmsGrpCod = T2.ItmsGrpCod
    LEFT JOIN IBT1 T3 ON T3.ItemCode = T0.ItemCode AND T3.WhsCode = T0.WhsCode
    WHERE T0.OnHand > 0
      AND T0.WhsCode = '[%0]'
    Hope Helpful
    Regards
    Kennedy
