Anomaly when query timeout in Data buffer

Hello experts
I'm using MII 12.2 and a buffered query (the "allow buffering" flag) against a database with several table locks.
When I get a query timeout (the query's LastErrorMessage is "Response timed out, request terminated"), the query is not queued in the buffer.
Only if I send the query to the buffer by setting a wrong user id in the data server does the buffer manager work properly (when I restore the user id, the query completes its logic once the lock is released).
If anybody wants to replicate the environment, the query for the table lock in SQL Server is
     BEGIN TRANSACTION
     SELECT * FROM [your table name] WITH (TABLOCKX, HOLDLOCK);
     WAITFOR DELAY '00:10:00'
     ROLLBACK TRANSACTION
You have to run it in SQL Manager, not in MII
Sequence:
- Create in MII a generic query with SELECT * FROM [your table name], and enable "allow buffering"
- Call it in a transaction (if you want, with exception handler)
- Run the lock query in SQL Server
- Run the transaction (while the table is locked by the SQL Server query) and wait for the transaction/query error
- Check data buffer: the MII query is not buffered
Is it normal?
Is there any patch?
Thanks
Regards
Fabio

Hi Fabio,
I think the query does not get buffered because no communication error happened.
MII buffers an external call only if there is a communication error.
In your case the communication happened fine; the DB simply did not give a response, and so the timeout happened. Another scenario of this type is a query with a syntax error: the communication with the DB still happens, but the DB returns the error, so MII will not buffer the query.
Regards,
Rohit Negi.
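
For anyone reproducing Fabio's scenario, a quick way to confirm the table really is held locked while the MII query times out is to check sys.dm_tran_locks from a second SSMS session. This is a minimal diagnostic sketch (SQL Server 2005 and later), not MII-specific:
     -- Run in the locked database while the WAITFOR DELAY transaction is open;
     -- the TABLOCKX shows up as an OBJECT lock with request_mode X.
     SELECT resource_type, request_mode, request_status, request_session_id
     FROM sys.dm_tran_locks
     WHERE resource_database_id = DB_ID();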

Similar Messages

  • Inconsistent SDO_RELATE results when querying 2.5D data

    Oracle 11.1.0.7 with Patch 8343061 on Windows Server 2003 32bit.
    I'm getting inconsistent results from SDO_RELATE when querying 2.5D data. Some geometries I expect to be OVERLAPBDYDISJOINT are not always returned by SDO_RELATE when using the OVERLAPBDYDISJOINT mask. It seems that the order of the tables makes a difference to the result.
    Here's a table with one 2.5D geometry and a 2D index:
    CREATE TABLE TEST1 (
    ID                NUMBER PRIMARY KEY,
    GEOMETRY     SDO_GEOMETRY);
    INSERT INTO TEST1 (id, geometry) VALUES (
    1,
    SDO_GEOMETRY(3002, 2157, NULL, SDO_ELEM_INFO_ARRAY(1, 2, 1), SDO_ORDINATE_ARRAY(561695.935, 834005.726, 25.865,
    561696.229, 834005.955, 25.867, 561686.278, 834015.727, 26.088, 561685.179, 834019.771, 26.226, 561680.716, 834022.389, 26.226,
    561674.434, 834025.125, 26.171, 561671.963, 834032.137, 25.667, 561670.832, 834037.185, 25.619, 561667.946, 834042.976, 25.84,
    561666.717, 834047.218, 26.171, 561664.229, 834051.781, 26.778, 561660.041, 834055.935, 26.64, 561657.514, 834061.742, 26.53,
    561658.59, 834067.116, 27.882, 561657.67, 834070.739, 28.821, 561653.028, 834073.777, 29.042, 561653.234, 834078.769, 28.379,
    561658.336, 834080.105, 29.511, 561664.582, 834079.468, 31.94, 561669.257, 834075.821, 33.707, 561672.716, 834074.456, 33.707,
    561676.875, 834077.262, 33.735, 561675.868, 834081.55, 33.707, 561673.131, 834087.641, 33.679, 561672.208, 834093.502, 33.238,
    561668.578, 834100.894, 33.735, 561666.013, 834106.399, 33.679, 561661.408, 834111.23, 33.514, 561654.854, 834117.181, 33.486,
    561651.695, 834122.292, 33.569, 561649.112, 834128.847, 33.431, 561645.982, 834134.786, 33.293, 561642.485, 834141.235, 33.072,
    561642.138, 834150.085, 33.293, 561646.072, 834159.721, 36.578, 561647.274, 834165.532, 37.02, 561646.359, 834170.867, 37.02,
    561645.42, 834175.485, 36.799, 561642.44, 834180.977, 36.826, 561638.677, 834185.419, 36.771, 561636.693, 834194.824, 37.158,
    561635.462, 834202.105, 37.241, 561631.998, 834208.745, 37.268, 561628.871, 834213.994, 37.241, 561627.554, 834220.393, 37.82,
    561625.79, 834226.697, 39.532, 561620.561, 834236.494, 39.891, 561619.265, 834249.687, 39.697, 561619.883, 834260.02, 41.326,
    561620.977, 834264.399, 43.093, 561622.557, 834270.723, 43.452, 561622.172, 834276.978, 43.452, 561621.347, 834285.541, 43.479,
    561622.214, 834292.055, 43.645, 561619.718, 834302.583, 43.755, 561616.762, 834316.47, 43.755, 561608.842, 834328.241, 43.7,
    561606.346, 834334.93, 43.7, 561605.27, 834341.929, 43.7, 561603.925, 834350.648, 43.728, 561602.462, 834358.405, 43.838,
    561599.552, 834366.629, 44.031, 561594.551, 834374.291, 43.396, 561590.644, 834383.986, 43.065, 561588.48, 834392.21, 44.942,
    561586.923, 834397.32, 46.737, 561584.608, 834402.898, 49.299, 561581.389, 834410.194, 50.077, 561580.437, 834419.49, 51.907,
    561580.438, 834427.63, 53.127, 561582.245, 834433.389, 55.791, 561586.664, 834433.397, 57.503, 561593.88, 834433.608, 57.475,
    561596.305,834439.653, 57.42, 561591.804, 834445.862, 57.309, 561589.097, 834447.689, 57.014)));
    SELECT sdo_geom.validate_geometry_with_context(geometry, 0.0005) FROM TEST1;
    DELETE FROM user_sdo_geom_metadata WHERE table_name = 'TEST1' AND column_name = 'GEOMETRY';
    INSERT INTO user_sdo_geom_metadata VALUES ('TEST1','GEOMETRY', 
         MDSYS.SDO_DIM_ARRAY(
         MDSYS.SDO_DIM_ELEMENT('X',400000,750000,0.0005),
         MDSYS.SDO_DIM_ELEMENT('Y',500000,1000000,0.0005),      
         MDSYS.SDO_DIM_ELEMENT('Z',-10000,10000,0.0005)     
    ), 2157);
    DROP INDEX TEST1_SPIND;
    CREATE INDEX TEST1_SPIND ON TEST1(GEOMETRY) INDEXTYPE IS MDSYS.SPATIAL_INDEX PARAMETERS ('layer_gtype=line sdo_indx_dims=2');
    And here's another table with a 2D geometry and a 2D index:
    CREATE TABLE TEST2 (
    ID                NUMBER PRIMARY KEY,
    GEOMETRY     SDO_GEOMETRY);
    INSERT INTO TEST2 (id, geometry) VALUES (
    1,
    SDO_GEOMETRY(2002, 2157, NULL, SDO_ELEM_INFO_ARRAY(1, 2, 1), SDO_ORDINATE_ARRAY(561816.516, 834055.581, 561819.504, 834057.173,
    561817.942, 834060.818, 561810.044, 834078.997, 561805.576, 834087.634, 561801.572, 834094.299, 561798.558, 834100.467,
    561796.254, 834107.637, 561793.754, 834115.605, 561794.049, 834123.694, 561793.698, 834130.518, 561792.905, 834138.883,
    561787.867, 834145.772, 561782.544, 834150.548, 561777.707, 834156.53, 561773.945, 834161.32, 561771.061, 834166.957,
    561768.155, 834173.131, 561764.735, 834178.744, 561759.603, 834187.782, 561756.146, 834195.493, 561753.416, 834198.821,
    561754.141, 834205.691, 561756.768, 834209.681, 561757.217, 834216.701, 561753.086, 834232.46, 561744.371, 834254.589,
    561740.936, 834263.001, 561737.198, 834272.208, 561732.231, 834284.915, 561730.52, 834297.01, 561728.339, 834310.053,
    561727.825, 834328.069, 561730.461, 834342.992, 561729.808, 834367.948, 561730.216, 834396.988, 561732.273, 834419.047,
    561732.783, 834424.668, 561731.647, 834432.212, 561731.872, 834439.436, 561731.39, 834449.269, 561732.041, 834462.813,
    561733.583, 834471.926, 561733.229, 834485.049, 561730.868, 834498.462, 561726.379, 834512.59, 561725.776, 834528.932,
    561727.488, 834555.23, 561729.357, 834577.873, 561731.05, 834595.931, 561731.163, 834611.928, 561734.057, 834637.031,
    561732.67, 834636.4, 561725.401, 834633.796, 561721.039, 834632.493, 561718.777, 834632.167, 561710.437, 834632.888,
    561647.929, 834636.658, 561644.963, 834630.085, 561632.796, 834629.813, 561625.553, 834627.647, 561620.473, 834626.711,
    561608.718, 834624.94, 561599.935, 834619.684, 561596.67, 834613.843, 561594.27, 834607.774, 561592.513, 834601.752,
    561591.349, 834593.899, 561597.265, 834584.888, 561595.956, 834571.479, 561595.075, 834556.196, 561593.997, 834539.68,
    561594.316, 834528.071, 561595.261, 834516.44, 561595.538, 834504.804, 561597.227, 834497.417, 561599.3, 834490.416,
    561601.265, 834482.61, 561605.126, 834475.502, 561599.232, 834473.683, 561593.076, 834471.379, 561599.154, 834451.112,
    561589.097, 834447.689, 561591.804, 834445.862, 561596.305, 834439.653, 561593.88, 834433.608, 561582.245, 834433.389,
    561580.438, 834427.63, 561580.437, 834419.49, 561581.389, 834410.194, 561584.608, 834402.898, 561586.923, 834397.32,
    561588.48, 834392.21, 561590.644, 834383.986, 561594.551, 834374.291, 561599.552, 834366.629, 561602.462, 834358.405,
    561603.925, 834350.648, 561605.27, 834341.929, 561606.346, 834334.93, 561608.842, 834328.241, 561616.762, 834316.47,
    561619.718, 834302.583, 561622.214, 834292.055, 561621.347, 834285.541, 561622.172, 834276.978, 561622.557, 834270.723,
    561620.977, 834264.399, 561619.883, 834260.02, 561619.265, 834249.687, 561620.561, 834236.494, 561625.79, 834226.697,
    561627.554, 834220.393, 561628.871, 834213.994, 561631.998, 834208.745, 561635.462, 834202.105, 561636.693, 834194.824,
    561638.677, 834185.419, 561642.44, 834180.977, 561645.42, 834175.485, 561646.359, 834170.867, 561647.274, 834165.532,
    561646.072, 834159.721, 561642.138, 834150.085, 561642.485, 834141.235, 561645.982, 834134.786, 561649.112, 834128.847,
    561651.695, 834122.292, 561654.854, 834117.181, 561661.408, 834111.23, 561666.013, 834106.399, 561668.578, 834100.894,
    561672.208, 834093.502,561673.131, 834087.641, 561675.868, 834081.55, 561676.875, 834077.262, 561672.716, 834074.456,
    561669.257, 834075.821, 561664.582, 834079.468, 561658.336, 834080.105, 561653.234, 834078.769, 561653.028, 834073.777,
    561657.67, 834070.739, 561658.59, 834067.116, 561657.514, 834061.742, 561660.041, 834055.935, 561664.229, 834051.781,
    561666.717, 834047.218, 561667.946, 834042.976, 561670.832, 834037.185, 561671.963, 834032.137, 561674.434, 834025.125,
    561680.716, 834022.389, 561685.179, 834019.771, 561686.278, 834015.727, 561696.229, 834005.955, 561695.935, 834005.726,
    561677.805, 833994.91, 561683.163, 833985.817, 561703.01, 833949.434, 561725.891, 833961.856, 561744.35, 833971.197,
    561768.396, 833983.86, 561777.842, 833988.883, 561798.333, 833999.743, 561797.243, 834005.725, 561783.574, 834040.515,
    561798.127, 834046.391, 561807.001, 834050.509, 561816.516, 834055.581)));
    SELECT sdo_geom.validate_geometry_with_context(geometry, 0.0005) FROM TEST2;
    DELETE FROM user_sdo_geom_metadata WHERE table_name = 'TEST2' AND column_name = 'GEOMETRY';
    INSERT INTO user_sdo_geom_metadata VALUES ('TEST2','GEOMETRY', 
         MDSYS.SDO_DIM_ARRAY(
         MDSYS.SDO_DIM_ELEMENT('X',400000,750000,0.0005),
         MDSYS.SDO_DIM_ELEMENT('Y',500000,1000000,0.0005)
    ), 2157);
    DROP INDEX TEST2_SPIND;
    CREATE INDEX TEST2_SPIND ON TEST2(GEOMETRY) INDEXTYPE IS MDSYS.SPATIAL_INDEX PARAMETERS ('layer_gtype=line sdo_indx_dims=2');
    Now if I check how these two geometries relate to each other, the answer is OVERLAPBDYDISJOINT, which makes sense when inspecting the geometries.
    SQL> SELECT
      2  sdo_geom.relate(t1.geometry, 'determine', t2.geometry,  0.0005) relate_1_to_2,
      3  sdo_geom.relate(t2.geometry, 'determine', t1.geometry,  0.0005) relate_2_to_1
      4  FROM test2 t2, test1 t1
      5  WHERE t1.id = t2.id;
    RELATE_1_TO_2        RELATE_2_TO_1
    OVERLAPBDYDISJOINT   OVERLAPBDYDISJOINT
    1 row selected.
    So, I'd expect this query to return something...
    SELECT /*+ ORDERED */ t1.id, t2.id, sdo_geom.relate(t1.geometry, 'determine', t2.geometry,  0.0005) relate
    FROM test2 t2, test1 t1
    WHERE sdo_relate(t1.geometry, t2.geometry, 'mask=overlapbdydisjoint') = 'TRUE'
    AND t1.id = 1
    AND t2.id = 1;
    Nada. And this...
    SELECT /*+ ORDERED */ t1.id, t2.id, sdo_geom.relate(t1.geometry, 'determine', t2.geometry,  0.0005) relate
    FROM test1 t1, test2 t2
    WHERE sdo_relate(t2.geometry, t1.geometry, 'mask=overlapbdydisjoint') = 'TRUE'
    AND t1.id = 1
    AND t2.id = 1;
    Nada.
    And this...
    SQL> SELECT /*+ ORDERED */ t1.id, t2.id, sdo_geom.relate(t1.geometry, 'determine', t2.geometry,  0.0005) relate
      2  FROM test2 t2, test1 t1
      3  WHERE sdo_relate(t2.geometry, t1.geometry, 'mask=overlapbdydisjoint') = 'TRUE'
      4  AND t1.id = 1
      5  AND t2.id = 1;
            ID         ID RELATE
             1          1 OVERLAPBDYDISJOINT
    1 row selected.
    This version gives the right answer.
    Can anyone explain this?

    Hi,
    I think you are running into bugs 7158518 and 7710726.
    Could you please request the patches for these bugs so that they are published on Metalink,
    if they don't already exist there?
    Please let us know if these fix your problem.
    Best regards
    baris
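
    Until the patches are in place, a possible workaround consistent with the tests above is to filter on sdo_geom.relate directly instead of the sdo_relate operator. This is only a hedged sketch (using the test1/test2 tables from the post): it bypasses the spatial index, so it will be slow on large tables.
        -- sdo_geom.relate with a specific mask is documented to return the mask
        -- name when the relationship holds and 'FALSE' otherwise, so compare
        -- against 'FALSE' to stay safe about the exact return value.
        SELECT t1.id, t2.id
        FROM test1 t1, test2 t2
        WHERE t1.id = 1
        AND t2.id = 1
        AND sdo_geom.relate(t1.geometry, 'overlapbdydisjoint', t2.geometry, 0.0005) <> 'FALSE';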

  • Unable to access the data from Data Management Gateway: Query timeout expired

    Hi,
    For the past 2-3 days the data refresh has been failing on our Power BI site. I checked the following:
    1. The gateway is in running status.
    2. Data source is also in ready status and test connection worked fine too.
    3. Below is the error in System Health -
    Failed to refresh the data source. An internal service error has occurred. Retry the operation at a later time. If the problem persists, contact Microsoft support for further assistance.        
    Error code: 4025
    4. Below is the error in Event Viewer.
    Unable to access the data from Data Management Gateway: Query timeout expired. Please check 1) whether the data source is available 2) whether the gateway on-premises service is running using Windows Event Logs.
    5. This is the correlation id for the latest refresh failure:
    f9030dd8-af4c-4225-8674-50ce85a770d0
    6. The Refresh History error is:
    Errors in the high-level relational engine. The following exception occurred while the managed IDataReader interface was being used: The operation has timed out. Errors in the high-level relational engine. The following exception occurred while the managed IDataReader interface was being used: Query timeout expired.
    Any idea what could have gone wrong suddenly? Everything was working fine for the last month.
    Thanks,
    Richa

    Never mind, figured out there was a lock on a SQL table which caused all the problems. Once I released the lock, the PowerPivot refresh started working fine.
    Thanks.
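
    For future readers hitting the same symptom, a minimal sketch (SQL Server 2005 or later) for spotting the blocking session before deciding how to release it:
        -- The BlkBy column of sp_who2 shows which session blocks which.
        EXEC sp_who2;
        -- Or, more precisely, list only the requests that are currently blocked:
        SELECT session_id, blocking_session_id, wait_type
        FROM sys.dm_exec_requests
        WHERE blocking_session_id <> 0;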

  • Error when executing a query on master data

    Hi Friends,
    When I execute a query on the master data characteristic InfoObject (0BPARTNER) from BEx Analyzer, I get the error below. 0BPARTNER contains 15 attributes. I am getting this error only for this query; all other queries work fine with the same BEx Analyzer.
    An error occurred in the communication with the BW Server.
    Due to this, the connection has to be closed.
    Detailed Description:
    The system is configured incorrectly.
    Please tell me what the problem could be. How can I overcome this?
    Thanks,
    Sasi

    Hi Arun,
    Before executing the query I did that, and it said 'Query is Correct'.
    Any more ideas?
    Thanks,
    Sasi.

  • Receiving OCI/ORA-27163 when querying XML data in 11g.

    When querying a table with a column stored in XML format, we get this error for large XML values. This worked in Oracle 10g, but the error occurs when the same data is queried in an 11.2.0.3 instance. I have had a support request open with Oracle for about a month and so far it has not made any progress. What we've seen is the following:
    1. The xml data itself is good.
    2. The good xml data when queried with a wholly 10g client/server environment, works.
    3. The same good xml data when inserted into a table in an 11g instance through a stored package run from the 11g instance (server to server) gets the 27163 error.
    4. The same xml data when queried in a 10g instance from an 11g client fails with the 27163 error.
    5. If we disable the 11g XML parser by setting an internal event (alter session set events='31156 trace name context forever, level 0x400';), then run the same code that was run in item 3 above, it does not get the 27163 error. However, the xml loaded to the 11g instance can then only be queried without getting the 27163 error if we use an older 10g client. An 11g client consistently gets the error.
    The server versions are 11.2.0.3 (upgraded from 10.2.0.4). The SQL*Net client versions that we have tested on are 11.2.0.2 and 11.2.0.3 (both produce the 27163 error).
    With considerable trial and error, we found that it is some combination of the file size and format that causes the error. Copied below is the smallest sample of a failing XML that will produce this error (about 8 KB). Remove any single character, even in a comment, and the file parses successfully. But it's not the file size alone: the original 800+ KB file where we first discovered this problem will process if the period in the attribute @value="${item.id}" is removed.
    We are at our wit's end, and with an 11g migration project looming, any ideas anyone can suggest would be very helpful.
    Thanks,
    Joe
    Here is a test case to play around with. Sorry I don't have a way to upload a zip file for this, but you can cut & paste from the post:
    1. CREATE TABLE my_xml_test
    (record_id NUMBER(4,0),
    xml SYS.XMLTYPE,
    comments VARCHAR2(200));
    ALTER TABLE my_xml_test
    ADD CONSTRAINT my_xml_test_pk PRIMARY KEY (record_id)
    USING INDEX;
    2. mkdir SampleData
    3. cd into SampleData, copy the XML I will post in the first reply to this message into a new text document called xml_items_removed.xml.
    4. cd .. and create a file called 20120112_11g_bad_xml_issue.txt with this in it:
    1, .\SampleData\xml_items_removed.xml,"Fails: OriginalXML w/ all instances of <Item> removed (count 600) -- reduces file size to ca. 55KB but still fails (Saved by XMLSpy)"
    5. Next create your SQL*Loader control file (call it my_xml_test.ctl):
    LOAD DATA
    APPEND INTO TABLE my_xml_test
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (record_id,
    ext_fname FILLER CHAR(200),
    xml LOBFILE(ext_fname) TERMINATED BY EOF,
    comments)
    6. sqlldr <username>/<password>@<database> control=my_xml_test.ctl data=20120112_11g_bad_xml_issue.txt log=20120112_11g_bad_xml_issue.log bad=20120112_11g_bad_xml_issue.bad
    7. Once the data is loaded, query the my_xml_test table that you created in step 1. You should get the 27163 error:
    SELECT a.record_id,
    comments,
    a.XML,
    length(a.XML.GetClobVal()) clob_length
    FROM my_xml_test a;

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- edited with XMLSpy v2012 sp1 (http://www.altova.com) by Martin l (National Board of Medical Examiners) -->
    <IRData application="Item Review" version="1.3" itempool="2006001" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
         <ViewerTypes>
              <Viewer id="its-viewer">
                   <Title>Item Text</Title>
                   <Description>Item Text</Description>
                   <BaseUrl>https://www.starttest.com/flex/4.3.0.0/InstitutionViewItem.aspx</BaseUrl>
                   <Parameters>
                        <Parameter name="pid" value="MSST"/>
                        <Parameter name="iid" value="01123"/>
                        <Parameter name="username" value="SomeUser"/>
                        <Parameter name="item" value="${item.id}"/>
                        <Parameter name="code" value="${CODE}"/>
                   </Parameters>
              </Viewer>
         </ViewerTypes>
         <Filters>
              <Filter id="drop-single-filter">
                   <AttributeSource>drop</AttributeSource>
                   <ControlType>select-1-drop-down-plus-all</ControlType>
              </Filter>
              <Filter id="onoff-single-filter">
                   <AttributeSource>onoff</AttributeSource>
                   <ControlType>select-1-drop-down-plus-all</ControlType>
              </Filter>
              <Filter id="reviewed-single-filter">
                   <AttributeSource>reviewed</AttributeSource>
                   <ControlType>select-1-drop-down-plus-all</ControlType>
              </Filter>
              <Filter id="build-multi-filter">
                   <AttributeSource>build</AttributeSource>
                   <ControlType>select-1-drop-down-plus-all</ControlType>
              </Filter>
              <Filter id="flag-multi-filter">
                   <AttributeSource>flag</AttributeSource>
                   <ControlType>select-1-drop-down-plus-all</ControlType>
              </Filter>
              <Filter id="u1dt-multi-filter">
                   <AttributeSource>U1DT</AttributeSource>
                   <ControlType>select-1-drop-down-plus-all</ControlType>
              </Filter>
              <Filter id="fb2-multi-filter">
                   <AttributeSource>FB2</AttributeSource>
                   <ControlType>select-1-drop-down-plus-all</ControlType>
              </Filter>
              <Filter id="fb1t-multi-filter">
                   <AttributeSource>FB1T</AttributeSource>
                   <ControlType>select-1-drop-down-plus-all</ControlType>
              </Filter>
              <Filter id="scorecat-multi-filter">
                   <AttributeSource>scorecat</AttributeSource>
                   <ControlType>select-1-drop-down-plus-all</ControlType>
              </Filter>
              <Filter id="images-multi-filter">
                   <AttributeSource>PIXN</AttributeSource>
                   <ControlType>select-1-drop-down-plus-all</ControlType>
              </Filter>
              <Filter id="vignettes-multi-filter">
                   <AttributeSource>VIG1</AttributeSource>
                   <ControlType>select-1-drop-down-plus-all</ControlType>
              </Filter>
         </Filters>
         <DataPanels>
              <Control displayed="true">
                   <Filters>
                        <Filter>
                             <Description>Drop</Description>
                             <FilterSource>drop-single-filter</FilterSource>
                        </Filter>
                        <Filter>
                             <Description>Version</Description>
                             <FilterSource>build-multi-filter</FilterSource>
                        </Filter>
                        <Filter>
                             <Description>On/Off Test</Description>
                             <FilterSource>onoff-single-filter</FilterSource>
                        </Filter>
                        <Filter>
                             <Description>Review status</Description>
                             <FilterSource>reviewed-single-filter</FilterSource>
                        </Filter>
                        <Filter>
                             <Description>Flag</Description>
                             <FilterSource>flag-multi-filter</FilterSource>
                        </Filter>
                        <Filter>
                             <Description>Discipline</Description>
                             <FilterSource>fb1t-multi-filter</FilterSource>
                        </Filter>
                        <Filter>
                             <Description>Organ System</Description>
                             <FilterSource>u1dt-multi-filter</FilterSource>
                        </Filter>
                        <Filter>
                             <Description>Step 1 outline</Description>
                             <FilterSource>fb2-multi-filter</FilterSource>
                        </Filter>
                        <Filter>
                             <Description>Images</Description>
                             <FilterSource>images-multi-filter</FilterSource>
                        </Filter>
                        <Filter>
                             <Description>Vignettes</Description>
                             <FilterSource>vignettes-multi-filter</FilterSource>
                        </Filter>
                        <Filter>
                             <Description>Score categories</Description>
                             <FilterSource>scorecat-multi-filter</FilterSource>
                        </Filter>
                   </Filters>
              </Control>
              <ItemList displayed="true">
                   <Attributes>
                        <Attribute displayed="true">
                             <Description>Item</Description>
                             <AttributeSource>itemref</AttributeSource>
                             <ControlType>text-display</ControlType>
                        </Attribute>
                        <Attribute displayed="true">
                             <Description>On test</Description>
                             <AttributeSource>onoff</AttributeSource>
                             <ControlType>checkbox-select</ControlType>
                        </Attribute>
                        <Attribute displayed="true">
                             <Description>Drop</Description>
                             <AttributeSource>drop</AttributeSource>
                             <ControlType>checkbox-select</ControlType>
                        </Attribute>
                        <Attribute displayed="true">
                             <Description>Reviewed</Description>
                             <AttributeSource>reviewed</AttributeSource>
                             <ControlType>check-display</ControlType>
                        </Attribute>
                        <Attribute displayed="true">
                             <Description>Flag</Description>
                             <AttributeSource>flag</AttributeSource>
                             <ControlType>select-1-drop-down</ControlType>
                        </Attribute>
                        <Attribute displayed="true">
                             <Description>Description</Description>
                             <AttributeSource>DESCR</AttributeSource>
                             <ControlType>text-display</ControlType>
                        </Attribute>
                        <Attribute displayed="true">
                             <Description>Discipline</Description>
                             <AttributeSource>FB1T</AttributeSource>
                             <ControlType>text-display</ControlType>
                        </Attribute>
                        <Attribute displayed="true">
                             <Description>Organ System</Description>
                             <AttributeSource>U1DT</AttributeSource>
                             <ControlType>text-display</ControlType>
                        </Attribute>
                        <Attribute displayed="true">
                             <Description>Step 1 outline</Description>
                             <AttributeSource>FB2</AttributeSource>
                             <ControlType>text-display</ControlType>
                        </Attribute>
                        <!-- Attribute displayed="false">                         <Description>Organ System plus</Description>                         <AttributeSource>ORGA</AttributeSource>                         <ControlType>text-display</ControlType>                    </Attribute -->
                        <Attribute displayed="true">
                             <Description>Images</Description>
                             <AttributeSource>PIXN</AttributeSource>
                             <ControlType>text-display</ControlType>
                        </Attribute>
                        <Attribute displayed="true">
                             <Description>Vignettes</Description>
                             <AttributeSource>VIG1</AttributeSource>
                             <ControlType>text-display</ControlType>
                        </Attribute>
                        <Attribute displayed="true">
                             <Description>Diff</Description>
                             <AttributeSource>pvalue</AttributeSource>
                             <ControlType>text-display</ControlType>
                        </Attribute>
                        <Attribute displayed="true">
                             <Description>Discr</Description>
                             <AttributeSource>rbvalue</AttributeSource>
                             <ControlType>text-display</ControlType>
                        </Attribute>
                        <Attribute displayed="true">
                             <Description>Answer Key</Description>
                             <AttributeSource>key</AttributeSource>
                             <ControlType>text-display</ControlType>
                        </Attribute>
                        <Attribute displayed="true">
                             <Description>Notes</Description>
                             <AttributeSource>notes</AttributeSource>
                             <ControlType>text-display</ControlType>
                        </Attribute>
                   </Attributes>
              </ItemList>
              <CurrentItemInfo>
                   <ItemText displayed="true"/>
                   <ItemData>
                        <Attribute>
                             <DataSource>scorecat</DataSource>
                             <ControlType>select-each-value</ControlType>
                        </Attribute>
                   </ItemData>
              </CurrentItemInfo>
         </DataPanels>
    </IRData>
    <?BASELINE FB2-A.02.01 on:4 off:3?>
    <?BASELINE FB2-A.02.02 on:4 off:4?>
    <?BASELINE FB2-A.03.01 on:3 off:1?>
    <?BASELINE FB2-A.03.02 on:0 off:1?>
    <?BASELINE FB2-A.03.04 on:3 off:2?>
    <?BASELINE FB2-A.03.05 on:1 off:0?>
    <?BASELINE FB2-A.03.07 on:4 off:2?>
    <?BASELINE FB2-A.04.01 on:2 off:5?>
    <?BASELINE FB2-A.04.02 on:5 off:3?>
    <?BASELINE FB2-A.04.03 on:3 off:1?>
    <?BASELINE FB2-A.04.04 on:1 off:1?>
    <?BASELINE FB2-A.05.01 on:12 off:14?>
    <?BASELINE FB2-A.05.02 on:9 off:1?>
    <?BASELINE FB2-A.05.03 on:0 off:6?>
    <?BASELINE FB2-A.06.01 on:7 off:9?>
    <?BASELINE FB2-A.06.02 on:2 off:6?>
    <?BASELINE FB2-A.06.03 on:10 off:5?>
    <?BASELINE FB2-A.06.04 on:1 off:1?>
    <?BASELINE FB2-A.07 on:23 off:20?>

  • Year getting reduced by 1 when querying date fields

    Hi, I am an Oracle retail support engineer. One of my clients logged an SR with the following:
    When querying an application table (shown below), the date comes out corrupted (data snippet also shown below):
    Query:
    select create_date, to_char(create_date,'DD-MON-RRRR') from shipment_table;
    Result:
    CREATE_DATE TO_CHAR(CREATE_DATE,'DD-MON-RRRR')
    08-JAN-10     08-JAN-2009
    08-JAN-10     08-JAN-2009
    08-JAN-10     08-JAN-2009
    08-JAN-10     08-JAN-2009
    08-JAN-10     08-JAN-2009
    Any idea what can cause this? Any help is greatly appreciated.
    Thanks
    Srinivas

    Unfortunately, I'm not able to reproduce that error.
    scott@ORCL>
    scott@ORCL>select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE    11.1.0.6.0      Production
    TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production
    Elapsed: 00:00:00.11
    scott@ORCL>
    scott@ORCL>
    scott@ORCL>with shipment
      2  as
      3    (
      4      select to_date('08-JAN-10','DD-MON-YY') create_date from dual
      5      union all
      6      select to_date('08-JAN-10','DD-MON-YY') create_date from dual
      7      union all
      8      select to_date('08-JAN-10','DD-MON-YY') create_date from dual
      9      union all
    10      select to_date('08-JAN-10','DD-MON-YY') create_date from dual
    11      union all
    12      select to_date('08-JAN-10','DD-MON-YY') create_date from dual
    13    )
    14  select create_date,
    15         to_char(create_date,'DD-MON-RRRR') char_date
    16  from shipment;
    CREATE_DA CHAR_DATE
    08-JAN-10 08-JAN-2010
    08-JAN-10 08-JAN-2010
    08-JAN-10 08-JAN-2010
    08-JAN-10 08-JAN-2010
    08-JAN-10 08-JAN-2010
    Elapsed: 00:00:00.04
    scott@ORCL>
    scott@ORCL>
    What is your Oracle version?
    Regards.
    Satyaki De.
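
    A useful next diagnostic for suspected date corruption, sketched here against the table name from the question, is to dump the internal date bytes; Oracle stores the century and year in the first two bytes, so a stored 2009 vs. 2010 is directly visible:
        -- For 08-JAN-2010 the first two bytes are 120,110 (century 20, year 10);
        -- 120,109 in the data would mean the stored year really is 2009.
        SELECT create_date, DUMP(create_date) AS internal_bytes
        FROM shipment_table
        WHERE ROWNUM <= 5;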

  • Slow response when querying v$lock or other data dictionary views

    Hello,
    We have a severe database performance problem. Here is the situation:
    1. When querying a data dictionary view such as v$lock from any Windows machine on the network, the performance is slow (about 3 minutes).
    2. When querying v$lock directly on the server where the database resides, the query comes back fast (normal).
    3. When querying a user table such as employees from any Windows machine on the network, the performance is normal.
    4. When querying a user table directly from the server where the database resides, the query comes back normal.
    Looking at v$session_wait for the slow query, it is waiting on "SQL*Net message to client".
    People from our network group and security group looked and did not find any problem.
    Thanks for your help!

    user8175606 wrote:
    [quoted question snipped]
    I saw the same problem several times. All of them were related to a wrong execution plan on v$lock (and on dba_locks and other views based on v$lock), for instance due to the "crazy" optimizer mode FIRST_ROWS.
    Also look at note 431770.1.
    I suspect that when you looked at v$session_wait and saw the event "SQL*Net message to client", the state of the event was not WAITING. Please check: that would mean Oracle is not waiting but is burning CPU.
    So please let us know: does another optimizer mode improve performance?
    select /*+ all_rows */ * from v$lock;
    select /*+ rule */ * from v$lock;
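
    A quick sketch for the state check suggested above (column names as in 10g; filter further by SID for the slow session):
        -- STATE = 'WAITING' means a true wait; 'WAITED KNOWN TIME' or
        -- 'WAITED SHORT TIME' means the session is actually running on CPU.
        SELECT sid, event, state, seconds_in_wait
        FROM v$session_wait
        WHERE event = 'SQL*Net message to client';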

  • SQL query to bind grid data and print total count and total amount when the date changes

    SELECT SLHD.VOUCH_CODE,SLHD.VOUCH_DATE,SLHD.VOUCH_NUM,SUM(SLTXN.CALC_NET_AMT) AS AMT,ACT.ACT_NAME,SUM(SLTXN.TOT_QTY) AS QTY
    FROM SL_HEAD20132014 AS SLHD,ACCOUNTS AS ACT,SL_TXN20132014 AS SLTXN
    WHERE SLHD.ACT_CODE=ACT.ACT_CODE AND SLTXN.VOUCH_CODE=SLHD.VOUCH_CODE
    GROUP BY SLHD.VOUCH_CODE,SLHD.VOUCH_DATE,SLHD.VOUCH_NUM,ACT.ACT_NAME
    ORDER BY SLHD.VOUCH_DATE 
    I want to print the total quantity and total sale amount in the grid whenever the date changes,
    like
    date amount quantity
    01/02/2013 1200 1
    01/02/2013  200 1
    01/02/2013  1400 2 // date changes here 
    02/03/2013 100 1 
    02/03/2013 50 4
    02/03/2013 150 5 // date changes and so on

    This query only prints all the data from the table; I want the total quantity and total amount of daily sales in the same grid whenever the date changes.
    You may add the date filter to Visakh's query:
    SELECT SLHD.VOUCH_DATE,SUM(SLTXN.CALC_NET_AMT) AS AMT,SUM(SLTXN.TOT_QTY) AS QTY
    FROM SL_HEAD20132014 AS SLHD,ACCOUNTS AS ACT,SL_TXN20132014 AS SLTXN
    WHERE SLHD.ACT_CODE=ACT.ACT_CODE AND SLTXN.VOUCH_CODE=SLHD.VOUCH_CODE AND SLHD.VOUCH_DATE = @yourdate --passed from the front end application
    GROUP BY SLHD.VOUCH_DATE
    WITH CUBE
    ORDER BY SLHD.VOUCH_DATE
    That said, querying the table each time you select a date would be an expensive approach. You could instead filter on the date within the dataset you have already populated, if the dataset holds the entire data.
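
    If the goal is a subtotal row each time the date changes, a hedged sketch using ROLLUP (table and column names taken from the original query; adjust to your schema) may be closer than CUBE:
        -- Detail rows per voucher plus one subtotal row per VOUCH_DATE
        -- (subtotal rows have VOUCH_NUM = NULL), ending with a grand total.
        SELECT SLHD.VOUCH_DATE, SLHD.VOUCH_NUM,
               SUM(SLTXN.CALC_NET_AMT) AS AMT, SUM(SLTXN.TOT_QTY) AS QTY
        FROM SL_HEAD20132014 AS SLHD
        JOIN SL_TXN20132014 AS SLTXN ON SLTXN.VOUCH_CODE = SLHD.VOUCH_CODE
        GROUP BY ROLLUP (SLHD.VOUCH_DATE, SLHD.VOUCH_NUM)
        ORDER BY SLHD.VOUCH_DATE;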

  • Query performance and data loading performance issues

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes.
    What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes.
    Will reward full points.
    Regards
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Powerpivot Report 2010 - OLE DB or ODBC error: Query timeout expired; HYT00

    I have a report already developed by someone in PowerPivot 2010 and deployed to SharePoint in a PowerPivot Gallery library. The data source for this report is SQL Server 2008 R2. I have set the refresh options in SharePoint to run after business hours during the weekend. When I check the refresh history it shows this error message: "OLE DB or ODBC error: Query timeout expired; HYT00. An error occurred while processing the 'xxx' table. The current operation was cancelled because another operation in the transaction failed."
    I have tried "Also refresh as soon as possible" just to check, but the same error message is shown.
    The detailed error message is:
    03/02/2014 21:06:55 03/02/2014 22:08:15 01:01:20 Failed
    OLE DB or ODBC error: Query timeout expired; HYT00. An error occurred while processing the 'xxx' table. The current operation was cancelled because another operation in the transaction failed
    Could anyone guide me on this issue? Truly appreciate your help in advance.

    Hi asritha,
    I would suggest you take a look at the following articles regarding how to configure PowerPivot data refresh in SharePoint Server:
    Configure and Use Stored Credentials for PowerPivot Data Refresh:
    http://technet.microsoft.com/en-us/library/ee210671(v=sql.105).aspx
    Configure and Use the PowerPivot Unattended Data Refresh Account:
    http://technet.microsoft.com/en-us/library/ff773327(v=sql.105).aspx
    Note: Please ensure that the account have sufficient permission access to your data source.
    In addition, please describe your PowerPivot data source in more detail. If you are using an SSAS cube, please try increasing the "ExternalConnectionTimeout" property from its default value to see if this helps. Here is a similar thread for your reference:
    http://social.technet.microsoft.com/Forums/en-US/35b26c06-9e6d-41e5-ae44-bfb1233510ac/ssas2008-ole-db-error-ole-db-or-odbc-error-query-timeout-expired-hyt00?forum=sqldatamining
    Regards,
    Elvis Long
    TechNet Community Support

  • Session Timeouts accessing data from Xcelsius

    I am running Business Objects XI R2 SP2 and Xcelsius 2008 FP1.
    Basically, within my models I use Live Office to refresh the initial data used within the models, and then I use QaaWS with certain drill-through parameters to retrieve more detailed information.
    All this works fine and returns all the data I need, but I get session errors when accessing the models from InfoView.
    When I log into InfoView, I can run one model fine and do all the necessary refreshes and further drill-downs without any trouble, but when I close the model down and try to go into the next model I get the following error message:
    Your Web Intelligence session has timed out (WIS 30553).
    Now I have updated all the settings I know of, e.g.:
    1) Webi report server timeout in server properties
    2) Configured the command line of the Webi report server in CCM to include the timeout parameter
    3) Amended the web.xml file to increase the session timeout
    4) Amended the web.xml file to uncomment the listener settings
    5) I have even updated the session timeout on the Tomcat server (tomcat\conf\web.xml)
    Has anyone any other ideas, as I cannot access the models?
    I know this is not related to Webi reports, as I can refresh and open other reports without any issues, so I think it must be related to Xcelsius in some way.
    Any help will be gratefully received.
    Anthony
    Edited by: Anthony Jones on Mar 12, 2009 12:32 PM

    Hey, thanks a lot, your code worked!
    I wrote it in my procedure and ran it through my job. It's working as needed now.
    I analysed what your code does. Let me know if I'm right or wrong.
    The view apex_activity_log is based on a query like
         SELECT * FROM wwv_flow_activity_log
    WHERE security_group_id = (SELECT wwv_flow.get_sgid
    FROM DUAL
    WHERE ROWNUM = 1);
    What I understand is that when I run the query in Toad, it gets a null value in the above WHERE clause, so my query below does not retrieve any data.
    But when I set security_group_id, it gets a value through wwv_flow.get_sgid, and my query retrieves the data for my app_id.
         SELECT l.userid,
    a.application_name,
    l.step_id page_no,
    l.TIME_STAMP,
    l.ir_report_id,
    l.elap,
    l.session_id,
    l.ip_address
    FROM apex_activity_log l, apex_applications a
    WHERE l.flow_id = a.application_id
    AND l.flow_id = 200; --app_id
    Can you please explain what security_group_id is and its significance? Are workspace_id and security_group_id the same?

  • Data Buffer error USER_AUTH_FAILED: User account for logonid "SYSTEM"

    All, I have the following errors on both the Quality and the Production system in our data buffer jobs:
    com.sap.security.api.NoSuchUserException: USER_AUTH_FAILED: User account for logonid "SYSTEM" not found!
    These entries will not process because they generate an error saying the logonid for the username SYSTEM is not found.
    So I am thinking that somehow the MII system is not capturing the correct username when the entries are added to the data buffer jobs, or there is something I am overlooking when I set up the data buffering.
    Other entries in the data buffer jobs were listed as using the RS1000SVC-QMUSBATCH and RS1630SVC-PMIIBATCH user accounts. These are the accounts that our scheduled tasks run under.
    Those entries process OK out of the data buffer jobs.
    I did notice a similarity between the data buffer jobs in the quality and production systems as it pertains to the following transactions.
    Production MII ver 12.0.7 (Build 20)
    Muscatine%2FIntegration%2FSAP%2FPROD_CONFIRMED_INPUT_InsertQuery
    Which is called from the MIIC1043_IDOC Message Processing Rule.
    Muscatine%2FIntegration%2FSAP%2FHEADER_InsertQuery
    Which is called from the MIIC1043_Control_Recipe_Download Message Processing Rule.
    Quality MII 12.0.11 (Build 14)
    Muscatine%2FIntegration%2FSAP%2FPROD_CONFIRMED_INPUT_InsertQuery
    Which is called from the MIIC1043_IDOC Message Processing Rule.
    So the commonality is that these transactions are being initiated by the message processing rules.
    Are there known issues with data buffering for transactions initiated by message processing rules?
    Is anyone successfully using data buffering of transactions called by message processing rules?
    Any help is appreciated.
    Bob

    Jeremy, thanks for your reply.
    There doesn't seem to be much detailed information on the use of categories with processing rules in the Help or in the forums, so let me see if I understand your suggestion correctly.
    On the MII server, create a processing rule for the message using a category instead of a transaction. The message received by the message listener will be placed in a buffer. I am assuming these messages would show up in the message monitor and not in the data buffer jobs/entries.
    So in my transaction which normally processes this data, I could add logic to access the message data using the Message Service (Query, Read, Update and Delete) action blocks. I could pare down the selection by selecting messages based on the MessageCategory that I defined in the message processing rule. This would allow me to access the stored message data.
    Finally, use a scheduled job to execute the transaction. The scheduled job would run with a valid user ID and password, so if its connection to the external database failed, the entries would be placed in the data buffer jobs with valid user ID credentials.
    Does this sound like what you had in mind?

  • Error [hyt00] [microsoft][odbc sql server driver] query timeout expired

    I have the below network setup:
    1. It is a simple network at my father's office in a small town called Ichalkaranji (District Kolhapur, Maharashtra).
    2. We are using the private network range 192.168.1.xxx with two licensed copies of Windows Server 2003 Enterprise Edition with SP2 and 15 local Windows 7 clients who only use Server A.
    3. The network has a TP-Link broadband router connected to the internet with the IP 192.168.1.1.
    4. Both Windows Server 2003 Enterprise Edition with SP2 machines run separate instances of SQL Server 2005 Express with Advanced Services; treat them as Server A (the problematic server, IP 192.168.1.2) and Server B (no issues, IP 192.168.1.3).
    5. Server A is also used by 6 remote users from our Kolkata office using a DDNS facility through the No-IP client software, which is installed separately on both servers. Kolkata remote users do not use or access Server B.
    6. Server B is used by only 2 remote users from our Erode office (District Salem, Tamil Nadu) using the same DDNS facility. Erode remote users do not use or access Server A.
    7. The front-end application, which runs separately against both servers, was developed in VB by a local vendor at Ichalkaranji.
    8. Both servers have the same database structure in terms of design and table format. The only difference is that the servers are used separately.
    9. This error/problem is related only to Server A, where on the clients we get the message "error [hyt00] [microsoft][odbc sql server driver] query timeout expired" every now and then.
    10. We have to reboot the server frequently whenever we get this message on the client machines. After rebooting, everything may work perfectly for 2 hours, 5 hours, or maybe one full day, but the error always comes back.
    11. The current database backup size on Server A is around 35 GB, and the backup takes around 1 hour 15 minutes daily.
    12. The current database backup size on Server B is around 3 GB, and the backup takes around 5 to 10 minutes daily.
    13. One thing I have noticed is that whenever we reboot Server A, for some time sqlservr.exe shows memory usage of 200 to 300 MB, but then it grows slowly; what I understand is that this is the way SQL Server works.
    14. Both servers also run separate licensed copies of Quick Heal Antivirus Server Edition.
    15. Server B also runs a licensed copy of Tally ERP 9, which is used locally by the same 15 users at Ichalkaranji.
    Can anyone help to resolve this issue? Any help will be highly appreciated.

    The error message "query timeout expired" occurs, because by default many APIs, including ODBC only waits for 30 seconds for SQL Server to send any output. If no data has been seen for this period of time, they tell SQL Server to cancel execution
    and return this error to the caller.
    This timeout could be seen as a token that the query is taking too long time to execute and this needs to be fixed. But it can also be a pain in the rear parts, if you are content with a report taking five minutes, because you only run it once a day.
    The simplest way to get rid of the error is to set the timeout to zero, which means "wait forever". This is something your vendor would have to do. This may, however, not resolve the problem, as the users may just find that the application is hanging.
    To wit, there are two reasons why the query takes more than 30 seconds to complete. One is that there is simply that much work to do. This can be reduced by adding indexes or by doing other tuning, if the execution time is not acceptable. The other possibility
    is blocking. That is, there is a process blocking this query from completing. This is much more likely to require attention.
    It is not clear to me, whether the vendor has developed the database part as well. In this case, you should probably call the vendor's support desk to have them to sort this out.
    Finally, I am little puzzled. You say that you are using Express Edition, but one of the databases is 35 GB in size. 35 GB is far above the limit for Express Edition.
    Erland Sommarskog, SQL Server MVP, [email protected]
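
    If blocking is ruled out, a hedged starting point for finding the statements that need tuning or indexes is the plan cache statistics (SQL Server 2005 and later; the counters reset when the instance restarts):
        -- Top 5 statements by total elapsed time since the last restart.
        SELECT TOP 5
               1.0 * qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microsec,
               qs.execution_count,
               SUBSTRING(st.text, qs.statement_start_offset / 2 + 1, 200) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_elapsed_time DESC;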

  • Data Buffer Cache Quality

    Hi All,
    Can somebody please tell me some ways in which I can improve the data buffer quality? Presently it is 51.2%. The DB is 10.2.0.2.0.
    I want to know what factors I need to keep in mind if I want to increase DB_CACHE_SIZE.
    Also, I want to know how I can find out the cache hit ratio.
    Further, I want to know which are the most frequently accessed objects in my DB.
    Thanks and Regards,
    Nick.

    Nick-- wud b DBA wrote:
    Hi Aman,
    Thanks. Can you please give the appropriate query for that?
    And moreover, when I run:
    SQL> desc V$SEGMENT-STATISTICS;
    it gives the following error:
    SP2-0565: Illegal identifier.
    Regards,
    Nick.
    LOL dude, I put that by mistake. It's a dash (-) sign but we need an underscore (_) sign.
    About the query, it may vary depending on what you really mean by "most used object". If you mean the object that is undergoing lots of reads and writes, this may help:
    SELECT Rownum AS Rank,
    Seg_Lio.*
    FROM (SELECT St.Owner,
    St.Obj#,
    St.Object_Type,
    St.Object_Name,
    St.VALUE,
    'LIO' AS Unit
    FROM V$segment_Statistics St
    WHERE St.Statistic_Name = 'logical reads'
    ORDER BY St.VALUE DESC) Seg_Lio
    WHERE Rownum <= 10
    UNION ALL
    SELECT Rownum AS Rank,
    Seq_Pio_r.*
    FROM (SELECT St.Owner,
    St.Obj#,
    St.Object_Type,
    St.Object_Name,
    St.VALUE,
    'PIO Reads' AS Unit
    FROM V$segment_Statistics St
    WHERE St.Statistic_Name = 'physical reads'
    ORDER BY St.VALUE DESC) Seq_Pio_r
    WHERE Rownum <= 10
    UNION ALL
    SELECT Rownum AS Rank,
    Seq_Pio_w.*
    FROM (SELECT St.Owner,
    St.Obj#,
    St.Object_Type,
    St.Object_Name,
    St.VALUE,
    'PIO Writes' AS Unit
    FROM V$segment_Statistics St
    WHERE St.Statistic_Name = 'physical writes'
    ORDER BY St.VALUE DESC) Seq_Pio_w
    WHERE Rownum <= 10;
    But if you are looking for the objects most heavily involved in waits, this query may help:
    select * from
       (select
          DECODE
          (GROUPING(a.object_name), 1, 'All Objects', a.object_name)
       AS "Object",
    sum(case when
       a.statistic_name = 'ITL waits'
    then
       a.value else null end) "ITL Waits",
    sum(case when
       a.statistic_name = 'buffer busy waits'
    then
       a.value else null end) "Buffer Busy Waits",
    sum(case when
       a.statistic_name = 'row lock waits'
    then
       a.value else null end) "Row Lock Waits",
    sum(case when
       a.statistic_name = 'physical reads'
    then
       a.value else null end) "Physical Reads",
    sum(case when
       a.statistic_name = 'logical reads'
    then
       a.value else null end) "Logical Reads"
    from
       v$segment_statistics a
    where
       a.owner like upper('&owner')
    group by
       rollup(a.object_name)) b
    where (b."ITL Waits">0 or b."Buffer Busy Waits">0)This query's reference:http://www.dba-oracle.com/t_object_wait_v_segment_statistics.htm
    So it depends on what basis you want to rank the objects.
    About the cache increase: are you seeing any wait events related to the buffer cache or DBWR in the Statspack report?
    HTH
    Aman....
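
    On Nick's cache hit ratio question, one commonly used calculation (sketched here from v$buffer_pool_statistics; treat the ratio as a rough indicator, not a tuning goal in itself) is:
        -- Hit ratio per buffer pool: 1 - physical reads / logical gets.
        SELECT name,
               1 - (physical_reads / NULLIF(db_block_gets + consistent_gets, 0)) AS hit_ratio
        FROM v$buffer_pool_statistics;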

  • Error when writing to the Internal Buffer - runtime error during query display

    Hi
    I am working in BW 3.5. I am trying to run a query in BEx Analyzer but I am getting the following error:
    Error when writing in the Internal Buffer; see OSS note 156957
    I looked up the note but could not find relevant information.
    Please let me know what to do.
    Thanks,
    Padma

    Hi,
    please apply note 1614788. After implementing the note, you need to set the parameter RSR_HIER_THRESHOLD_EXPORTDB in table RSADMIN via report SAP_RSADMIN_MAINTAIN in transaction SE38.
    If you set this parameter to a value of 50,000, for example, all hierarchies with more than 50,000 nodes will be buffered in the database instead of using the export/import buffer.
    To find out the current size of a hierarchy, go to the hierarchy maintenance in transaction RSH1 and copy the hierarchy ID from the header data. With this key you can go to transaction SE16 and select table RSRHIEDIR_OLAP (field HIEID is the hierarchy ID).
    This should resolve your issue.
    Thanks,
    Venkat
