IP SLA FTP higher RTT than expected

We have set up IP SLA FTP on a few devices to test it as a means of monitoring our WAN connections, ensuring they are providing the bandwidth purchased. This info is collected by SolarWinds Orion for reporting purposes. What we have found, though, is that the reported RTT is far higher than it should be, which makes it unsuitable as a mechanism for monitoring WAN bandwidth.
One of the devices, a 4500X, has a 10Gb path back to the FTP server it's pulling the file from.
This is the output from show ip sla stat:
IPSLAs Latest Operation Statistics
IPSLA operation id: 40002
Latest RTT: 14978 milliseconds
Latest operation start time: 11:07:07 GMT Wed Dec 11 2013
Latest operation return code: Over threshold
Number of successes: 6
Number of failures: 0
Operation time to live: Forever
On the same 4500X a copy ftp null of the same file from the same FTP server results in:
Accessing ftp://*****:*****@10.246.0.11/test...
Loading test !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
[OK - 12407926/4096 bytes]
12407926 bytes copied in 22.320 secs (555911 bytes/sec)
Finally, if I try to FTP the same file from my PC, which only has a 1Gb NIC but, after going through another switch, goes through the same 4500X and takes the same path to the FTP server, I get:
ftp> get test
200 PORT command successful.
125 Data connection already open; Transfer starting.
226 Transfer complete.
ftp: 12407926 bytes received in 0.58Seconds 21246.45Kbytes/sec.
So, why is the 4500X (and the other Cisco devices we have tried) so much slower?  And is there a way to get realistic RTT values from IP SLA, so we can use it to take meaningful measurements of our WAN?
Thanks for any help!
Steve
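For reference, a minimal IOS configuration for this kind of IP SLA FTP GET operation looks roughly like the following. The operation number, server address, and file name are taken from the output above; the credentials, threshold, timeout, and schedule values are placeholders rather than the actual configuration from the post.
! Sketch of an IP SLA FTP GET probe; placeholder values as noted above.
ip sla 40002
 ftp get ftp://user:pass@10.246.0.11/test
 threshold 5000
 timeout 60000
 frequency 300
!
ip sla schedule 40002 life forever start-time now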

Similar Messages

  • Number of Records Loaded for 2LIS_02_HDR much higher than expected

    Hi, I just did a setup for 2LIS_02_HDR and in table EKKO has 1.2 million and in bw it looks like it is loading well over 6 million right now.  Is this because the HDR collects up detail information for header aggregation?  Even with that I don't expect 6 million records.
    Any ideas?  I'm fairly certain I put in all the right parameters in my setup load!  Any way to check?
    thanks!

    >
    Kenneth Murray wrote:
    > Hi, I just did a setup for 2LIS_02_HDR and in table EKKO has 1.2 million and in bw it looks like it is loading well over 6 million right now.  Is this because the HDR collects up detail information for header aggregation?  Even with that I don't expect 6 million records.
    >
    > Any ideas?  I'm fairly certain I put in all the right parameters in my setup load!  Any way to check?
    >
    > thanks!
    Hi,
    please check the following:
    1. The setup table must not be filled repeatedly. If it is filled multiple times, it will give you a higher number of records than expected. Kindly check the number of entries in table "MC11VA0HDRSETUP".
    2. If the records are higher than expected, then please delete the data for application 02 using transaction LBWG and refill it.
    Thanks
    Dipika

  • Network Utilization Lower Than Expected

    We are imaging systems using PXE and have 8 of them going at the same time.  We looked at the server and it showed around 200 Mbps of usage.  I would expect it to be much higher (we have two 1 Gb NICs on the server).
    Is there something that throttles this?  Any thoughts?

    Create this in the registry on your PXE WDS server:
    HKEY_LOCAL_MACHINE\Software\Microsoft\SMS\DP\RamDiskTFTPBlockSize
    Type: REG_DWORD
    Value: 16384 (decimal)          (Do not use a higher value than this!)
    (It is recommended that you increase this setting in multiples (4096, 8192, 16384, and so on) and that you not set a value higher than 16384.)
    Juke Chou
    TechNet Community Support

  • How to stop process chain, if it is taking too much time than expected.

    Sometimes a process chain takes much more time to finish than expected. How can I stop the process chain and execute it again?
    Thanks in Advance.
    Harman

    How can I stop the process chain?
    If the job is running for a long time:
    1) Go to RSMO and SM37 and check the long-running job there.
    2) There you can see the status of the job.
    3) If the job is still running, you can kill that job.
    4) Delete the failed request from the data target.
    For more details go to the link below:
    how to stop process chain if it yellow for long time
    How can I execute it again?
    Go to function module RSPC_API_CHAIN_START, enter your process chain name there, and execute.
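    For reference, a hedged ABAP sketch of starting a chain through that function module is shown below; the parameter names I_CHAIN and E_LOGID are assumed from the standard API and should be verified in SE37, and the chain name is a placeholder.
    * Hedged sketch: start a process chain via the API function module.
    * Parameter names assumed; check the function module interface in SE37.
    DATA: lv_logid TYPE rspc_logid.
    CALL FUNCTION 'RSPC_API_CHAIN_START'
      EXPORTING
        i_chain = 'ZMY_CHAIN'    " placeholder: the chain's technical name
      IMPORTING
        e_logid = lv_logid.      " log ID of the new chain run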

  • Down payment amount cannot be higher/lower than the preset value Error

    Hello.  Can anyone help me with this error?  When creating a down payment request from a sales order with a billing plan, I get the message 'Down payment amount cannot be higher/lower than the preset value' and the document isn't released to accounting.  When creating the sales order, I included a condition type that I created.  The billing plan then calculates the down payment request value with the total sum, i.e. net value plus condition value.  Upon creating the down payment request in VF01, I get this message.  I did a test and noticed that if the condition is not included in the sales order, the down payment request is created without errors.  How do I correct this?  Thanks for your anticipated response.

    Hi
    As you are getting the error 'Down payment amount cannot be higher/lower than the preset value', please check the total value of the document. In the sales document's billing plan, check the invoice value for the first billing date and also the sales order value. Secondly, check which condition type has been created and for what purpose it is being used.
    Regards
    Srinath

  • Schema version is lower than expected value

    While configuring the database at Step 3 of 9, it threw me an exception: INST-6177 OIM Schema version is lower than expected value.
    Create OIM 11g schema using repository creation utility and proceed with configuration.
    Now, Please help me...

    For the exception, the trace says that:
    [2011-05-19T14:29:10.511+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation.oracle.as.install.engine.modules.validation.handler.oimQueriesHandler.checkForUpgrade] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] [[
    [OIM_CONFIG_INTERVIEW] MDS Schema Version is correct
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation.oracle.as.install.engine.modules.validation.handler.oimQueriesHandler.checkForUpgrade] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] Exiting method executeHandler
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation.oracle.as.install.engine.modules.validation.handler.oimQueriesHandler.checkForUpgrade] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] [[
    [OIM_CONFIG_INTERVIEW] Database is not encryped. This is not an upgrade flow.
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation.oracle.as.install.engine.modules.validation.handler.oimQueriesHandler.checkForUpgrade] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] Could not fetch the schema version from the database
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation.oracle.as.install.engine.modules.validation.handler.oimQueriesHandler.checkForUpgrade] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] [[
    ERROR ====>>>>:INST-6177
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation.oracle.as.install.engine.modules.validation.handler.oimQueriesHandler.checkForUpgrade] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] [[
    Cause:OIM Schema version is lower than the expected value
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation.oracle.as.install.engine.modules.validation.handler.oimQueriesHandler.checkForUpgrade] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] [[
    Action:Create OIM 11g schema using Repository Creation Utility and proceed with configuration.
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation.oracle.as.install.engine.modules.validation.handler.oimQueriesHandler.checkForUpgrade] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] [[
    [OIM_CONFIG_INTERVIEW] Retrieving default locale set in the machine.
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation.oracle.as.install.engine.modules.validation.handler.oimQueriesHandler.checkForUpgrade] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] Exiting method executeHandler
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] Handler launch end: oimQueriesHandler.checkForUpgrade
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] Handler returned status: FAILED
    [2011-05-19T14:29:10.527+01:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.validation] [tid: 11] [ecid: 0000J08SN_J6UOYFLrvH8A1DpHfc000002,0] Error in validating schema details

  • Lower than expected 1310 bridge performance

    Hi All,
    We have recently installed a wireless WAN link over a 7 km (approximately 4.5 mile) distance. The towers are sufficient for the Fresnel zone, earth bulge, and obstacle clearance requirements.
    We have used Cisco Aironet 1310 bridges (one configured as root bridge, the other as non-root bridge). The antennas are 24 dBi from Hyperlink, connected by RP-TNC to N pigtails, also provided by Hyperlink. According to the Cisco utilities, the antennas have more than enough gain for the distance, climatic, and topographic conditions.
    The alignment of the antennas was made visually (using binoculars) on one side, and with precision instruments on the other.
    To select the frequency (the automatic selection works awfully badly; it establishes a connection only one time out of three) we use the Carrier Busy Test on the network interface / Radio 802.11G menu, where we are able to pick a frequency with zero percent (0%) utilization most of the time.
    The problems we are experiencing are the following:
    - The latency is very variable, ranging from 1 ms to 207 ms and other values. We have used other wireless equipment with more stable performance.
    - Packets are lost frequently, even with the slightest network traffic (2 PCs with terminal service sessions). Supposedly the link is 54 Mbps, but the quality is very low.
    - At the moment, we are losing the connection every couple of minutes. After that the non-root bridge stays down, and the only way to re-establish the connection is by resetting the 1310.
    - I see that the speed changes every time I refresh the IE utility.
    I still haven't configured any security because of how badly the link works.
    What could be the reason for such bad performance?
    Best Regards,
    Igor Sotelo.

    Hi All,
    I have noticed that when the switches have VLANs configured, the latency is very variable. In this particular case, the switches do not have this configuration. I haven't configured VLANs on the wireless bridges either.
    I will test the link without any switches; it's a good idea. Perhaps there is "something" with the wired network.
    Other than that, the specifics are:
    - I have used Belden RG-6 1530A cable, which is an even higher grade than the Belden 9077 recommended by the manuals.
    - The length of the cables is around 40 meters (120 feet).
    - We have installed another wireless link in the place that uses the same frequency but different polarity. Also, we try to separate the channels by at least 7 frequencies.
    - The other system is omnidirectional in nature, and it doesn't have excessive gain.
    - In the place where the equipment of both systems coexists, the physical separation between the antennas of the two systems is around 9 feet (3 meters).
    The error messages we get are:
    - On the root bridge:
    Mar 1 00:00:57.238 Information Interface Dot11Radio0, Deauthenticating Station 0017.0ec6.a590 Reason: Previous authentication no longer valid
    Mar 1 00:00:57.237 Warning Packet to client 0017.0ec6.a590 reached max retries, removing the client
    - On the non-root bridge:
    Mar 1 16:19:52.072 Notification Line protocol on Interface Dot11Radio0, changed state to up
    Mar 1 16:19:51.072 Error Interface Dot11Radio0, changed state to up
    Mar 1 16:19:51.071 Warning Interface Dot11Radio0, Associated To AP Central-2 0017.0ec6.a580 [None]
    Mar 1 16:16:00.255 Warning Interface Dot11Radio0, cannot associate: No Response
    Mar 1 16:15:50.397 Notification Line protocol on Interface Dot11Radio0, changed state to down
    Mar 1 16:15:49.398 Error Interface Dot11Radio0, changed state to down
    Mar 1 16:15:49.397 Warning Interface Dot11Radio0, parent lost: Too many retries
    Mar 1 16:14:56.788 Notification Line protocol on Interface Dot11Radio0, changed state to up
    Mar 1 16:14:55.788 Error Interface Dot11Radio0, changed state to up
    Mar 1 16:14:55.788 Warning Interface Dot11Radio0, Associated To AP Central-2 0017.0ec6.a580 [None]
    When we reload either of the bridges, the link is re-established for some time, with this message on the root bridge:
    Mar 1 00:00:44.194 Information Interface Dot11Radio0, Station NONROOTNAME 0017.0ec6.a590 Reassociated KEY_MGMT[NONE]
    Mar 1 00:00:35.456 Notification Line protocol on Interface Dot11Radio0, changed state to up
    I will appreciate any additional help.
    Best Regards,
    Igor Sotelo.

  • Lower than expected battery life? (pic warning)

    I have the X61s with the UltraLight screen and the 4-cell cylindrical battery. According to the specs I should be getting up to 4.5 hours. From what I've heard, Lenovo's estimates are normally quite accurate.
    Well, here's the thing - I probably top out at about 3 hours. This is QUITE a bit lower than expected. Could it be a faulty battery? I'm expecting more out of this baby!
    Message Edited by Kaitlyn2004 on 05-01-2009 07:13 PM
    Message Edited by JaneL on 05-01-2009 10:59 PM

    The 4.5-hour estimate assumes you don't use your laptop for anything other than word processing and that you dim the LCD. In addition, it also requires that you not use any wireless connection or run a lot of background processes.
    You can check your battery's condition by going into Power Manager and looking at the design capacity versus the actual capacity.
    Message Edited by JaneL on 05-01-2009 10:59 PM
    Regards,
    Jin Li
    May this year, be the year of 'DO'!
    I am a volunteer, and not a paid staff of Lenovo or Microsoft

  • Error in sql query as "loop has run more times than expected (Loop Counter went negative)"

    Hello,
    When I run the query as below
    DECLARE @LoopCount int
    SET @LoopCount = (SELECT Count(*) FROM KC_PaymentTransactionIDConversion with (nolock) Where KC_Transaction_ID is NULL and TransactionYear is NOT NULL)
    WHILE (
        SELECT Count(*)
        FROM KC_PaymentTransactionIDConversion with (nolock)
        Where KC_Transaction_ID is NULL
        and TransactionYear is NOT NULL
    ) > 0
    BEGIN
        IF @LoopCount < 0
            RAISERROR ('Issue with data in KC_PaymentTransactionIDConversion, loop has run more times than expected (Loop Counter went negative).', -- Message text.
                   16, -- Severity.
               1); -- State.
    SET @LoopCount = @LoopCount - 1
    end
    I am getting error as "loop has run more times than expected (Loop Counter went negative)"
    Could any one help on this issue ASAP.
    Thanks ,
    Vinay

    Hi Vinay,
    According to your code above, the error message makes sense. Once the value returned by "SELECT Count(*)  FROM KC_PaymentTransactionIDConversion with (nolock) Where KC_Transaction_ID is NULL and TransactionYear is NOT NULL" is bigger than 0,
    the loop decreases @LoopCount. Without changing the table data, the returned value is always bigger than 0, so the loop keeps decreasing @LoopCount until it goes negative and raises the error.
    To fix this issue with the current information, we should make the following modification:
    Change the code
    WHILE (
    SELECT Count(*)
    FROM KC_PaymentTransactionIDConversion with (nolock)
    Where KC_Transaction_ID is NULL
    and TransactionYear is NOT NULL
    ) > 0
    To
    WHILE @LoopCount > 0
    Besides, since the current query is senseless, please modify the query based on your requirement.
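    For illustration, a hedged sketch of the loop after that change might look like this (the table and column names come from the original post; the loop body is only a placeholder, since the real per-row work is not shown there):
    DECLARE @LoopCount int;
    SET @LoopCount = (SELECT COUNT(*)
                      FROM KC_PaymentTransactionIDConversion WITH (NOLOCK)
                      WHERE KC_Transaction_ID IS NULL
                        AND TransactionYear IS NOT NULL);
    -- Drive the loop from the counter itself instead of re-counting the table on every pass.
    WHILE @LoopCount > 0
    BEGIN
        -- Placeholder: the statement that actually fills KC_Transaction_ID would go here;
        -- the original post does not show it.
        SET @LoopCount = @LoopCount - 1;
    END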
    If there are any other questions, please feel free to ask.
    Thanks,
    Katherine Xiong
    Katherine Xiong
    TechNet Community Support

  • Why this Query is taking much longer time than expected?

    Hi,
    I need expert support on the below-mentioned issue:
    Why is this query taking much longer than expected? Sometimes I am getting a connection timeout error. Is there any better way to achieve the result in a shorter time?  Below, please find the DDL & DML:
    DDL
    BHDCollections
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[BHDCollections](
     [BHDCollectionid] [bigint] IDENTITY(1,1) NOT NULL,
     [GroupMemberid] [int] NOT NULL,
     [BHDDate] [datetime] NOT NULL,
     [BHDShift] [varchar](10) NULL,
     [SlipValue] [decimal](18, 3) NOT NULL,
     [ProcessedValue] [decimal](18, 3) NOT NULL,
     [BHDRemarks] [varchar](500) NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_BHDCollections] PRIMARY KEY CLUSTERED
    (
     [BHDCollectionid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    BHDCollectionsDet
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[BHDCollectionsDet](
     [CollectionDetailid] [bigint] IDENTITY(1,1) NOT NULL,
     [BHDCollectionid] [bigint] NOT NULL,
     [Currencyid] [int] NOT NULL,
     [Denomination] [decimal](18, 3) NOT NULL,
     [Quantity] [int] NOT NULL,
     CONSTRAINT [PK_BHDCollectionsDet] PRIMARY KEY CLUSTERED
    (
     [CollectionDetailid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    Banks
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[Banks](
     [Bankid] [int] IDENTITY(1,1) NOT NULL,
     [Bankname] [varchar](50) NOT NULL,
     [Bankabbr] [varchar](50) NULL,
     [BankContact] [varchar](50) NULL,
     [BankTel] [varchar](25) NULL,
     [BankFax] [varchar](25) NULL,
     [BankEmail] [varchar](50) NULL,
     [BankActive] [bit] NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_Banks] PRIMARY KEY CLUSTERED
    (
     [Bankid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    Groupmembers
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[GroupMembers](
     [GroupMemberid] [int] IDENTITY(1,1) NOT NULL,
     [Groupid] [int] NOT NULL,
     [BAID] [int] NOT NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_GroupMembers] PRIMARY KEY CLUSTERED
    (
     [GroupMemberid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [dbo].[GroupMembers]  WITH CHECK ADD  CONSTRAINT [FK_GroupMembers_BankAccounts] FOREIGN KEY([BAID])
    REFERENCES [dbo].[BankAccounts] ([BAID])
    GO
    ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_BankAccounts]
    GO
    ALTER TABLE [dbo].[GroupMembers]  WITH CHECK ADD  CONSTRAINT [FK_GroupMembers_Groups] FOREIGN KEY([Groupid])
    REFERENCES [dbo].[Groups] ([Groupid])
    GO
    ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_Groups]
    BankAccounts
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[BankAccounts](
     [BAID] [int] IDENTITY(1,1) NOT NULL,
     [CustomerID] [int] NOT NULL,
     [Locationid] [varchar](25) NOT NULL,
     [Bankid] [int] NOT NULL,
     [BankAccountNo] [varchar](50) NOT NULL,
     CONSTRAINT [PK_BankAccounts] PRIMARY KEY CLUSTERED
    (
     [BAID] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [dbo].[BankAccounts]  WITH CHECK ADD  CONSTRAINT [FK_BankAccounts_Banks] FOREIGN KEY([Bankid])
    REFERENCES [dbo].[Banks] ([Bankid])
    GO
    ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Banks]
    GO
    ALTER TABLE [dbo].[BankAccounts]  WITH CHECK ADD  CONSTRAINT [FK_BankAccounts_Locations1] FOREIGN KEY([Locationid])
    REFERENCES [dbo].[Locations] ([Locationid])
    GO
    ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Locations1]
    Currency
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[Currency](
     [Currencyid] [int] IDENTITY(1,1) NOT NULL,
     [CurrencyISOCode] [varchar](20) NOT NULL,
     [CurrencyCountry] [varchar](50) NULL,
     [Currency] [varchar](50) NULL,
     CONSTRAINT [PK_Currency] PRIMARY KEY CLUSTERED
    (
     [Currencyid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    CurrencyDetails
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[CurrencyDetails](
     [CurDenid] [int] IDENTITY(1,1) NOT NULL,
     [Currencyid] [int] NOT NULL,
     [Denomination] [decimal](15, 3) NOT NULL,
     [DenominationType] [varchar](25) NOT NULL,
     CONSTRAINT [PK_CurrencyDetails] PRIMARY KEY CLUSTERED
    (
     [CurDenid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    QUERY
    WITH TEMP_TABLE AS
    (
    SELECT     0 AS COINS, BHDCollectionsDet.Quantity AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
                          (BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
    FROM         BHDCollections INNER JOIN
                          BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
                          GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
                          BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
                          Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
                          CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
                          Banks ON BankAccounts.Bankid = Banks.Bankid
    GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
                          CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
    HAVING      (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'Currency') AND
                          (CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)
    UNION ALL
    SELECT     BHDCollectionsDet.Quantity AS COINS, 0 AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
                          (BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
    FROM         BHDCollections INNER JOIN
                          BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
                          GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
                          BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
                          Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
                          CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
                          Banks ON BankAccounts.Bankid = Banks.Bankid
    GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
                          CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
    HAVING      (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'COIN') AND
                          (CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)),
    TEMP_TABLE2 AS
    (
    SELECT CollectionDate,Bankname,DSLIPS AS DSLIPS,SUM(BN) AS BN,SUM(COINS)AS COINS  FROM TEMP_TABLE Group By CollectionDate,DSLIPS,Bankname
    )
    SELECT CollectionDate,Bankname,count(DSLIPS) AS DSLIPS,sum(BN) AS BN,sum(COINS) AS coins FROM TEMP_TABLE2 Group By CollectionDate,Bankname
    HAVING COUNT(DSLIPS)<>0;

    Without seeing an execution plan of the query it is hard to suggest something useful. Try inserting the result of the UNION ALL into a temporary table and then performing the aggregation on that table, not on a CTE.
    Just
    SELECT CollectionDate,Bankname,DSLIPS AS DSLIPS,SUM(BN) AS BN,SUM(COINS)AS COINS  FROM
    #tmp Group By CollectionDate,DSLIPS,Bankname
    HAVING COUNT(DSLIPS)<>0;
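    For illustration, a hedged sketch of that approach is below. It reuses the tables, parameters, and the first ('Currency') branch of the UNION ALL from the original query; the second ('COIN') branch would be inserted into the same temporary table in the same way, and the parameter values shown are only stand-ins.
    DECLARE @FromDate datetime = '20131201',
            @ToDate   datetime = '20131231',
            @Bankid   int      = 1;   -- stand-in values for the original procedure parameters
    IF OBJECT_ID('tempdb..#tmp') IS NOT NULL DROP TABLE #tmp;
    -- Materialize the UNION ALL result once instead of re-reading it through a CTE.
    SELECT 0 AS COINS, BHDCollectionsDet.Quantity AS BN, BHDCollections.BHDDate AS CollectionDate,
           BHDCollectionsDet.Currencyid, BHDCollections.BHDCollectionid AS DSLIPS, Banks.Bankname
    INTO #tmp
    FROM BHDCollections
         INNER JOIN BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid
         INNER JOIN GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid
         INNER JOIN BankAccounts ON GroupMembers.BAID = BankAccounts.BAID
         INNER JOIN Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid
         INNER JOIN CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid
         INNER JOIN Banks ON BankAccounts.Bankid = Banks.Bankid
    GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid,
             CurrencyDetails.DenominationType, CurrencyDetails.Denomination, BHDCollectionsDet.Denomination,
             Banks.Bankname, BHDCollections.BHDCollectionid
    HAVING (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid)
       AND (CurrencyDetails.DenominationType = 'Currency') AND (CurrencyDetails.Denomination = BHDCollectionsDet.Denomination);
    -- INSERT INTO #tmp SELECT ... : repeat the 'COIN' branch of the original UNION ALL here.
    -- Then aggregate the materialized rows, as in the snippet above.
    SELECT CollectionDate, Bankname, DSLIPS AS DSLIPS, SUM(BN) AS BN, SUM(COINS) AS COINS
    FROM #tmp
    GROUP BY CollectionDate, DSLIPS, Bankname
    HAVING COUNT(DSLIPS) <> 0;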
    Best Regards,Uri Dimant SQL Server MVP,
    http://sqlblog.com/blogs/uri_dimant/
    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting:
    Large scale of database and data cleansing
    Remote DBA Services:
    Improves MS SQL Database Performance
    SQL Server Integration Services:
    Business Intelligence

  • GR needs higher Quantity than STO Purchase order Qunatity

    Dear Friend,
    Please suggest: in the case of an STO purchase order, I want to receive a higher quantity than the PO quantity.
    After capturing Part I, when the Check button is activated, it shows:
    PL Stock in transit exceeded by 0.500 MT : FIN500 1000 FSMS
    Message no. M7022
    I even tried to change the system message to a warning message in OMCQ, but it still shows the same.
    Please suggest.
    Jyoti Bhushan Sharma

    Hi,
    Increase the tolerance limit for the material in the PO under the material data tab, or check the 'Unlimited' field under the material data for the material.
    Regards,
    AM

  • Uninstall/install 3810/3805 or higher other than 10

    I need to uninstall Java 3810 and install 3805 or higher, other than 10. If you can help, please advise in lay terms to [email protected]

    I found this after some browsing. I haven't tried it, so no guarantees of course:
    "Microsoft Virtual Machine (VM) Removal
    By Kurt Koller, February 12, 2003.
    It would appear that Sun and Microsoft have finally battled out this chapter of the Java saga, resulting in a situation where the Microsoft Virtual Machine will no longer be distributed.
    Microsoft has a whole FAQ for developers, but they leave out one bit of information: how to actually uninstall the VM from your system.
    Removing this is vital for me, because I need to make 100% sure that any migration I've done is correct, and the only way to be 100% sure that we're never using the Microsoft VM is to remove it.
    Sure, we could port some stuff to J#/.NET and we are allowed to redistribute the VM with our products, but we'd rather just say good riddance. So here's how (but don't blame me if you trash something):
    Instructions
    Start -> Run...
    Key in the following and hit return: rundll32 advpack.dll,LaunchINFSection java.inf,UnInstall
    You will then get a prompt to uninstall (a scary one telling you that IE will no longer be able to download files, which is bogus), choose yes, and then when finished it will want to reboot. Let it.
    To remove the residual traces you may have of the Microsoft VM, remove the following (where %WINDOWS% is your system directory, usually C:\WINDOWS\ or C:\WINNT\ depending on OS:
    * %WINDOWS%\java (entire folder)
    * %WINDOWS%\inf\java.inf (may have been deleted by uninstall)
    * %WINDOWS%\inf\java.pnf (may have been deleted by uninstall)
    * Search your system drive for "javavm.dll" and remove it (may have been deleted by uninstall)
    That's it. Then go install the VM of your choice.
    If you have problems with IE continually telling you that you need to install a VM even if you already have one installed, turn off the option "Install on Demand (Other)" in Tools -> Internet Options... -> Advanced.
    "

  • Unable to include report viewer web part. Error is, assembly has a higher version than referenced assembly

    I am making a web part in VS 2010 for SharePoint 2010. The web part uses Report Viewer control. The files I have referenced in my project with "Copy Local = True" option are:
    Microsoft.ReportViewer.Common.dll  
    Microsoft.ReportViewer.WebForms.dll
    Both are version 10 files.
    When I build the project it works fine. But when I add this web part in a page it shows following error.
    Compiler Error Message: CS1705: Assembly 'MySolution, Version=1.0.0.0, Culture=neutral, PublicKeyToken=8acc41a360fa228d' uses 'Microsoft.ReportViewer.WebForms, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' which has a higher version
    than referenced assembly 'Microsoft.ReportViewer.WebForms, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
    I have already installed both the Report Viewer 10 redistributable and its SP1, but no luck. I also copied these DLLs to my SharePoint site's bin folder and did iisreset many times. The .NET Framework is 3.5 and the OS is Windows Server 2008 R2.
    How to fix this issue?

    Fixed it by updating my web.config which was referencing version 9 instead of 10.
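    One common form of that fix is an assembly binding redirect in web.config; the following is a hedged example only, since the exact entry depends on how the site's web.config references the ReportViewer assemblies.
    <!-- Redirect requests for the old 9.0 ReportViewer assembly to the installed 10.0 version -->
    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <assemblyIdentity name="Microsoft.ReportViewer.WebForms"
                              publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
            <bindingRedirect oldVersion="9.0.0.0" newVersion="10.0.0.0" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>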

  • Other system has higher release than XI syst. in landscape

    Hi,
    We are running an XI 3.0/NW04 system and we're currently planning to upgrade our BW 3.5/NW04 system to BI 7.0/NW04s.
    In the Master for NW04s you can read the following :
    "For PI, it is a prerequisite that no other system in your system landscape has a higher release than the PI system. If you want to upgrade or install an application in your system landscape, you first have to make sure that the current release of the PI system is on the same release level &#8210; if required, you have to upgrade the PI system first to the new or a higher release."
    After having read this, my understanding is that we have to upgrade our XI system before upgrading the BW system.
    What are the consequences/implications running a system landscape where a system has a higher release than the XI-system ?
    Thanks in advance for help and hints.
    Rgds,
    Christian

    Hi Christian,
    The implications are, PI as the highest release is the only supported combination.  It is possible that other combinations may work, but they are not tested by SAP and, in the case of any issues, SAP Support will tell you to upgrade your PI system to the highest release.
    Best Regards,
    Matt

  • More results than expected.

    Hi!
    With the "SELECT" query shown below I get
    more results than expected.
    What can the reason be for that?
    I get 72 entries instead of 12, and lines appear multiple times.
    When I go via SE16 and type in the material number,
    I am shown only 12 entries out of all the tables
    listed below.
    Regards
    Ilhan
    SELECT a~matnr
    a~herkl a~herkr
    b~exprf b~valid
    c~stawn c~valid
    c~prrfm c~peinh
    d~prwrk d~peinh
    INTO CORRESPONDING FIELDS OF TABLE ldm_all
    FROM LDM_LF AS a
    INNER JOIN LDM_VL_LFKD AS b ON
    a~matnr = b~matnr
    INNER JOIN LDM_VL_MAT AS  c ON
    a~matnr =  c~matnr
    INNER JOIN LDM_VL_KD AS   d ON
    a~matnr = d~matnr
    WHERE a~matnr IN mat AND
          b~valid IN dat.

    SELECT a~matnr a~kunnr
    a~herkl a~herkr
    b~exprf b~valid
    INTO CORRESPONDING FIELDS OF TABLE ldm_all1
    FROM LDM_LF AS a
    INNER JOIN LDM_VL_LFKD AS b ON
    a~matnr = b~matnr AND
    a~wlfnr = b~wlfnr
    WHERE a~matnr IN mat AND
                 b~valid IN dat.
    If sy-subrc = 0.
    SELECT  c~stawn c~valid
            c~prrfm c~peinh
            d~prwrk d~peinh
         INTO CORRESPONDING FIELDS OF TABLE ldm_all2
            FROM LDM_VL_MAT AS  c
             INNER JOIN  LDM_VL_KD AS   d ON
                            c~matnr = d~matnr AND
                            c~LDMQL = d~LDMQL AND
                            c~valid = d~valid
             FOR ALL ENTRIES IN i_ldm_all2
            WHERE c~matnr = i_ldm_all2-matnr AND
                        c~kunnr  = i_ldm_all2-kunnr AND
                         c~LDMQL = i_ldm_all2-ldmql AND
                          c~valid = i_ldm_all2-valid.
    ENDIF.
    Hope this solves ur query.
