Fetching query result for editbox

Hello everyone. I have just started developing Windows applications again (after a two-year pause), and I'm working on an MFC dialog-based app.
I want to fill a combo box (or an edit box, it doesn't matter) with a MySQL query result. Here is my code:
// MySQL Connector/C++ headers
#include <mysql_driver.h>
#include <mysql_connection.h>
#include <cppconn/statement.h>
#include <cppconn/resultset.h>

using namespace sql;
using namespace sql::mysql;
using namespace std;

void CConnectToMySQLDlg::OnWriteToDatabase()
{
    CString get, set;
    m_edit.GetWindowText(get);
    CT2CA temp_string(get);
    string s_get(temp_string);

    if (m_edit.GetWindowTextLength() != 0) // if you can please write this line better (e.g. if (!get.IsEmpty()))
    {
        MySQL_Driver *driver;
        Connection *dbConn;
        Statement *statement;
        ResultSet *result;

        driver = get_mysql_driver_instance();
        dbConn = driver->connect("tcp://127.0.0.1:3306", "root", "connection");
        dbConn->setSchema("world");
        statement = dbConn->createStatement();
        result = statement->executeQuery(s_get);

        //CStringArray comboArray;
        /*comboArray.Add(_T("bir"));
        comboArray.Add(_T("iki"));
        comboArray.Add(_T("uc"));
        comboArray.Add(_T("dord"));
        comboArray.Add(_T("bes"));
        comboArray.Add(_T("ffsfsfs"));
        comboArray.Add(_T("bfsfsfsdes"));
        comboArray.Add(_T("bczxczxes"));
        comboArray.Add(_T("aadafdsbes"));
        comboArray.Add(_T("zczxawfebes"));*/
        //comboMysql.ResetContent();

        string s_set = "string";
        set.Format(_T("%S"), s_set.c_str());

        while (result->next())
        {
            // here, how can I set the edit box value?
            // So, how can I convert the row data to a CString?
            AfxMessageBox(L"dsadasd");
        }

        delete result;
        delete statement;
        delete dbConn;
    }
    else // handling empty user input
    {
        AfxMessageBox(L"not allowed");
    }
}
I can get the input and successfully(1) execute it, but I don't know how to display its result(s).
(1) --- In this case I entered this SQL line:
select ID, Name from city where ID = 1 or ID = 2 or ID = 3 or ID = 4
and it alerted me 4 times (one message box per returned row).
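For reference, the conversion asked about in the code comments can look like the following minimal sketch; it uses the standard ATL/MFC conversion helpers, assumes a Unicode build, and the variable names are made up for illustration:

// std::string -> CString (CString has a constructor taking const char*)
std::string name = "Kabul";
CString cs(name.c_str());

// CString -> std::string, using the CT2CA helper already used above
CString text = _T("select * from city");
CT2CA converted(text);
std::string s(converted);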
P.S. If there is any problematic code, please tell me the correct variant(s).
Best regards,
Mirjalal.

I solved my problem. The working code is pasted below.
// Same Connector/C++ #includes and using-directives as in the code above.
void CmysqlDlg::OnBnClickedButton1()
{
    CString get;
    edit.GetWindowText(get);
    CT2CA temp_string(get);
    std::string s_get(temp_string);

    if (edit.GetWindowTextLength() != 0) // or simply: if (!get.IsEmpty())
    {
        MySQL_Driver *driver;
        Connection *dbConn;
        Statement *statement;
        ResultSet *result;

        driver = get_mysql_driver_instance();
        dbConn = driver->connect("tcp://127.0.0.1:3306", "root", "connection");
        dbConn->setSchema("world");
        statement = dbConn->createStatement();
        result = statement->executeQuery(s_get); // executes the user "input"

        comboBox.ResetContent();
        // ofstream y("text.txt"); // actually I don't need this anymore :P

        while (result->next())
        {
            std::string name_string = result->getString("Name"); // "Name" is my table's column name
            CString cs_name_string(name_string.c_str());
            comboBox.AddString(cs_name_string);
        }

        // delete each object exactly once (they were created with new, so use delete, not delete[])
        delete result;
        delete statement;
        delete dbConn;
    }
    else // handling empty user input
    {
        AfxMessageBox(L"olmaz"); // "olmaz" = "not allowed"
    }
}
Thanks.
Mirjalal

Similar Messages

  • Slow in Fetching Query Results

Hi guys, we have been experiencing a serious performance problem with our Oracle database application. The application is a C++ app running on a Windows box that queries an Oracle 9.2 database running on another Windows box through the Oracle 9.2 ODBC driver. What we see is that fetching the query results takes a significant length of time - around 1.7 seconds to fetch 3000 result rows on a 2.4 GHz dual-processor box, whereas the same app/query takes only 50 milliseconds to fetch the same number of result rows. Do you have any idea where I should look to track down the problem? The Oracle database configuration? The ODBC driver? Or our C++ database access code?
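One thing worth checking on the C++ side is whether the ODBC code fetches one row per SQLFetch call; switching to block (rowset) fetching often reduces the per-row overhead dramatically. A minimal sketch, assuming an already-allocated statement handle hStmt and a made-up table/column for illustration:

// fetch 500 rows per SQLFetch() call instead of one
SQLULEN rowsFetched = 0;
SQLINTEGER ids[500];
SQLLEN idLens[500];

SQLSetStmtAttr(hStmt, SQL_ATTR_ROW_ARRAY_SIZE, (SQLPOINTER)500, 0);
SQLSetStmtAttr(hStmt, SQL_ATTR_ROWS_FETCHED_PTR, &rowsFetched, 0);
SQLBindCol(hStmt, 1, SQL_C_SLONG, ids, sizeof(SQLINTEGER), idLens);

SQLExecDirect(hStmt, (SQLCHAR*)"SELECT id FROM some_table", SQL_NTS);
while (SQL_SUCCEEDED(SQLFetch(hStmt)))
{
    for (SQLULEN i = 0; i < rowsFetched; ++i)
    {
        // process ids[i] ...
    }
}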

    Yes, all tables involved in the query have 2 to 3 indexes.
I have re-computed statistics on the tables and changed the optimizer_mode to ALL_ROWS. Below I am attaching the init parameters of my database and the new execution plan:
    =========================================================
    aq_tm_processes = 1
    background_dump_dest = C:\oracle\admin\MBTuning\bdump
    compatible = 9.2.0.0.0
    control_files = ('C:\oracle\oradata\MBTuning\CONTROL01.CTL', 'C:\oracle\oradata\MBTuning\CONTROL02.CTL', 'C:\oracle\oradata\MBTuning\CONTROL03.CTL')
    core_dump_dest = C:\oracle\admin\MBTuning\cdump
    db_block_size = 8192
    db_cache_size = 134217728
    db_domain = ''
    db_file_multiblock_read_count = 16
    db_name = MBTuning
    dispatchers = '(PROTOCOL=TCP) (SERVICE=MBTuningXDB)'
    fast_start_mttr_target = 300
    hash_join_enabled = TRUE
    instance_name = MBTuning
    java_pool_size = 33554432
    job_queue_processes = 10
    large_pool_size = 67108864
    log_archive_dest_1 = 'LOCATION=C:\oracle\ora92\RDBMS'
    open_cursors = 300
    optimizer_features_enable = 9.2.0
    optimizer_index_caching = 100
    optimizer_mode = ALL_ROWS
    pga_aggregate_target = 134217728
    processes = 150
    query_rewrite_enabled = FALSE
    remote_login_passwordfile = EXCLUSIVE
    sga_max_size = 403774644
    shared_pool_size = 67108864
    sort_area_size = 524288
    star_transformation_enabled = FALSE
    timed_statistics = TRUE
    undo_management = AUTO
    undo_retention = 10800
    undo_tablespace = UNDOTBS1
    user_dump_dest = C:\oracle\admin\MBTuning\udump
    =========================================================
    STAT #1 id=1 cnt=2815 pid=0 pos=1 obj=0 op='SORT ORDER BY '
    STAT #1 id=2 cnt=2815 pid=1 pos=1 obj=0 op='NESTED LOOPS OUTER '
    STAT #1 id=3 cnt=2815 pid=2 pos=1 obj=0 op='VIEW '
    STAT #1 id=4 cnt=2815 pid=3 pos=1 obj=0 op='NESTED LOOPS OUTER '
    STAT #1 id=5 cnt=100 pid=4 pos=1 obj=0 op='NESTED LOOPS '
    STAT #1 id=6 cnt=100 pid=5 pos=1 obj=0 op='NESTED LOOPS '
    =====================
    PARSING IN CURSOR #2 len=116 dep=1 uid=0 oct=3 lid=0 tim=18446744070694916859 hv=189272129 ad='146e1308'
    select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.subname,o.dataobj#,o.flags from obj$ o where o.obj#=:1
    END OF STMT
    PARSE #2:c=0,e=460,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=0,tim=18446744070694916853
    BINDS #2:
    bind 0: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=08 oacfl2=1 size=24 offset=0
    bfp=053b8ac8 bln=22 avl=04 flg=05
    value=30312
    EXEC #2:c=0,e=261,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=18446744070694917254
    FETCH #2:c=0,e=27,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=4,tim=18446744070694917315
    STAT #1 id=7 cnt=100 pid=6 pos=1 obj=30312 op='TABLE ACCESS BY INDEX ROWID ASSETREFS '
    STAT #1 id=8 cnt=101 pid=7 pos=1 obj=30382 op='INDEX RANGE SCAN ASSETREF_PARENTID '
    BINDS #2:
    bind 0: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=08 oacfl2=1 size=24 offset=0
    bfp=053b8ac8 bln=22 avl=04 flg=05
    value=30307
    EXEC #2:c=0,e=63,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=18446744070694917526
    FETCH #2:c=0,e=13,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=4,tim=18446744070694917564
    STAT #1 id=9 cnt=100 pid=6 pos=2 obj=30307 op='TABLE ACCESS BY INDEX ROWID ASSETS '
    STAT #1 id=10 cnt=100 pid=9 pos=1 obj=30405 op='INDEX UNIQUE SCAN PK_ASSETS '
    BINDS #2:
    bind 0: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=08 oacfl2=1 size=24 offset=0
    bfp=053b8ac8 bln=22 avl=04 flg=05
    value=30313
    EXEC #2:c=0,e=58,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=18446744070694917738
    FETCH #2:c=0,e=11,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=4,tim=18446744070694917773
    STAT #1 id=11 cnt=100 pid=5 pos=2 obj=30313 op='TABLE ACCESS BY INDEX ROWID REVISIONS '
    STAT #1 id=12 cnt=100 pid=11 pos=1 obj=30502 op='INDEX RANGE SCAN REVISIONBASE '
    STAT #1 id=13 cnt=2815 pid=4 pos=2 obj=30410 op='INDEX RANGE SCAN PK_METADATA '
    BINDS #2:
    bind 0: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=08 oacfl2=1 size=24 offset=0
    bfp=053b8ac8 bln=22 avl=04 flg=05
    value=30315
    EXEC #2:c=0,e=58,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=18446744070694917965
    FETCH #2:c=0,e=12,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=4,tim=18446744070694918001
    STAT #1 id=14 cnt=0 pid=2 pos=2 obj=30315 op='TABLE ACCESS BY INDEX ROWID METADATA '
    STAT #1 id=15 cnt=0 pid=14 pos=1 obj=30410 op='INDEX UNIQUE SCAN PK_METADATA '

  • Slow query results for simple select statement on Exadata

I have a table with 30+ million rows that I'm trying to develop a cube around. When the cube processes (SQL Analysis), it queries back 10k rows every 6 seconds or so. I ran the same query SQL Analysis runs to grab the data in Toad and exported the results, and the timing is the same, 10k rows every 6 seconds or so.
I ran an execution plan and it returns just this:
Plan
SELECT STATEMENT ALL_ROWS  Cost: 136,019  Bytes: 4,954,594,096  Cardinality: 33,935,576
     1 TABLE ACCESS STORAGE FULL TABLE DMSN.DS3R_FH_1XRTT_FA_LVL_KPI  Cost: 136,019  Bytes: 4,954,594,096  Cardinality: 33,935,576
I'm not sure if there is a setting in Oracle (I'm new to the Oracle environment) that can limit performance by connection or user, but if there is, what should I look for and how can I check it?
The Oracle version I'm using is 11.2.0.3.0 and the server is quite large as well (Exadata platform). I'm curious because I've seen SQL Server return 100k rows every 10 seconds before, and I would assume an Exadata system should return rows a lot quicker. How can I check where the bottleneck is?
    Edited by: k1ng87 on Apr 24, 2013 7:58 AM

k1ng87 wrote:
I've noticed the same querying speed using Toad (export to CSV).
That's not really a good way to test performance. Doing that through Toad, you are getting the database to read the data from its disks (you don't have a choice in that), shifting bulk amounts of data over your network (that could be a considerable bottleneck), and then letting Toad format the data into CSV (processing that adds a little bottleneck) and write it to another hard disk (more disk I/O = more bottleneck).
I don't know Exadata, but I imagine it doesn't quite incorporate all those bottlenecks.
k1ng87 wrote:
and during cube processing via SQL Analysis. How can I check to see if it's my network speed that's affecting it?
Speak to your technical/networking team, who should be able to trace network activity/packets and see what's happening in that respect.
k1ng87 wrote:
Is that even possible as our system resides off site, so the traffic is going through multiple networks?
Ouch... yes, that could certainly be responsible.
k1ng87 wrote:
I don't think it's the network though, because when I run both at the same time, they both are still querying at about 10k rows every 6 seconds.
I don't think your performance measuring is accurate. What happens if you actually build the cube on Exadata rather than using Toad or SQL Analysis (which I assume is on your client machine)?

  • Serializing database query results for use as dataproviders

I am executing my database queries in Java and passing the results to Flex to create a chart or datagrid based on them.  The problem is, I don't know how to code this in Java in a way that Flex would understand.  For instance, Flex wants:
    <mx:DataGrid id="dg"  dataProvider="{myArray}">
      <mx:columns>
             <mx:DataGridColumn dataField="column1" />
             <mx:DataGridColumn dataField="column2" />
              <!-- etc ... -->
             <mx:DataGridColumn dataField="columnN" />
          </mx:columns>
    </mx:DataGrid>
I need to create this table-like array in Java and have it readable as a dataProvider by Flex.  I know about ArrayCollections, and Java ArrayLists can be nicely converted to them in Flex, but I just don't know how to create them with a column1, column2, etc. when I don't know from query to query how many columns will be in my results.
    Any help would be greatly appreciated!

You don't know in advance the columns that your queries will return?
    Or the number of records?
    Sincerely,
    Michael

  • Filtering a Query result for a specific value of a key figure

    Hi folks,
I want to filter a query on a specific value of a key figure. The report should display all customers who have an account balance greater than EUR 500'000.
Can I only implement this kind of filter via a condition in the Query Designer? If I run that query on an InfoCube with non-cumulative key figures over all our customers (> 400'000), I run into memory consumption problems. If I restrict the number of customers to 50'000 - 60'000 it works. It seems that the query reads all customer records into an internal table before applying the condition.
Is there another way to deal with this? Any help is appreciated.
    Kind regards
    Hansjörg Luzi
    Graubündner Kantonalbank
    CH-7002 Chur

You could try an alternative solution: define a new characteristic with discrete values (in the following example I assume intervals of 10):
KF -> CHAR
115 -> 110 (110 <= KF < 120)
  5 ->   0 (  0 <= KF <  10)
 83 ->  80 ( 80 <= KF <  90)
Then you can use these discrete values to filter ...
I know that it's not so "flexible" (the discretization must be predefined), but query response time is quicker and you can use Aggregates ...
    Hope it helps
    GFV

  • HOWTO: Writing Out XML Query Results of Any Size in Java with XML SQL Utility

    A customer mailed me asking for an example of how to use our XML SQL Utility to write out query results for tons of query result rows.
    With tons of data, the getXMLDOM() and getXMLString() methods are not really appropriate due to the size.
    The XML SQL Utility offers a getXMLSAX() method that streams SAX2 events to report the data being queried. This is the approach we can use to handle data of any size.
It dawned on me today that by putting together the XML SQL Utility's getXMLSAX() routine and the oracle.xml.parser.v2.XSLSAXPrintDriver SAX2 ContentHandler, we can effectively stream out data of any length to an appropriate writer.
    Here's a code example to get the point across:
    package test;
    import java.io.BufferedOutputStream;
    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.Driver;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.Properties;
    import javax.xml.transform.OutputKeys;
    import oracle.jdbc.OracleDriver;
    import oracle.xml.parser.v2.XSLException;
    import oracle.xml.parser.v2.XSLOutput;
    import oracle.xml.parser.v2.XSLSAXPrintDriver;
    import oracle.xml.sql.query.OracleXMLQuery;
public class Example  {
  private static final String QUERY = "select * from emp";

  public static void main(String[] args) throws Throwable  {
      Connection conn = getConnection();
      // Reuse the single connection rather than opening a second one here.
      OracleXMLQuery q = new OracleXMLQuery(conn, QUERY);
      // Any PrintWriter will do. Here we're writing to standard out.
      PrintWriter output = new PrintWriter(new BufferedOutputStream(System.out));
      // This is a SAX2 ContentHandler used by the Oracle XSLT Engine
      // to serialize a stream of SAX2 events as an XML document.
      // We'll use it to serialize the SAX2 events from the XML SQL Utility
      // out as an XML document.
      XSLSAXPrintDriver ch = new XSLSAXPrintDriver(output, outputOptions());
      // This asks the XML SQL Utility to fire SAX events for the data
      // being fetched instead of creating DOM nodes or returning text.
      // By using the XSLSAXPrintDriver content handler, these events
      // get handled by writing them directly to the output stream.
      q.getXMLSAX(ch);
      ch.flush();
      q.close();
      conn.close();
  }

  private static XSLOutput outputOptions() throws XSLException {
    XSLOutput x = new XSLOutput();
    Properties props = new Properties();
    props.put(OutputKeys.METHOD,"xml");
    props.put(OutputKeys.INDENT,"yes");  // Set to "no" for non-indented output
    x.setProps(props);
    return x;
  }

  public static Connection getConnection() throws SQLException {
    String username = "scott";
    String password = "tiger";
    String thinConn = "jdbc:oracle:thin:@localhost:1521:ORCL";
    Driver d = new OracleDriver(); // instantiating the driver class registers it with DriverManager
    return DriverManager.getConnection(thinConn, username, password);
  }
}

    Hi Uber,
This is a known issue: the error occurs when running the report "Count of instances of specific software registered with Add or Remove Programs" because of non-printable characters in the XML. Based on internal research, the hotfix for this issue will be
included in System Center 2012 Configuration Manager Service Pack 1.
As a workaround, you can remove the non-printable character populated into the report parameter by referring to the following KB article:
http://support.microsoft.com/KB/914159
    Hope this helps.
    Regards,
Mike Yin
    TechNet Community Support

  • SQL Developer v2.1.0.6 Build MAIN-63.73 doesn't show query results

SQL Developer v2.1.0.6 Build MAIN-63.73 doesn't show query results for "select * from all_scheduler_jobs;" when I click the "Run Statement" icon on the toolbar, but when I click "Run Script" the results appear.
Is this a bug?
This worked in the prior version.

    This bug was fixed in SqlDev 2.1.1. You'll find this build on our homepage.
    -Raghu

  • InfoSet query results confusion...

Hi All,
We have 2 ODS objects.  One is Billing and the other is Projected Sales.  In the Billing ODS we have billing documents that carry the billing date (calmonth) and the employee.  The Projected Sales ODS has, for each employee and each month, their projected sales.
We have created an InfoSet and linked calmonth (in Billing) to the period field (in Projected Sales) and 0SALESEMPLOYEE (in Billing) to PERNR (in Projected Sales).
The Projected Sales ODS has exactly 1 entry for each employee for each month, i.e. employee '12345' for 200501 has projected sales of 10,000.  With the above mapping, the query result for projected sales is about 50X too high.  In other words, it shows 30 million when it should be 150,000.  I think this has to do with it somehow including results from the Billing ODS.
We've read the help on interpreting InfoSet query results and it's quite confusing.  It mentions some filter option to exclude rows from the query result set, but we can't figure it out.
    In addition to the monthly projected sales we want to see for each employee, we want to include all the net sales they had for the same month.  So you can see the mapping we did above.
    Query would look like this:
    Calmonth  Employee  NetSales  ProjSales
    200501    123456    220,000   10,000
The projected sales is off by millions of dollars and the net value isn't calculating correctly either. I'm sure our setup is incorrect.
    Can someone help us out please.
    Thanks
    Mike

    Hi Ashish,
What you explained is exactly the case.
The Billing ODS has many, many entries for the employee + calmonth combination while Projected Sales only has 1.  I've figured out that this is what is happening: the number of entries in Billing is being multiplied by the Projected Sales value.
Is there a way to avoid this with a particular join in the InfoSet?
The key for the Billing ODS is billing number, billing item and fiscal year variant, while the key for Projected Sales is start period and PERNR.  So I don't see how a MultiProvider would work because the fields are different... am I incorrect?
    Thanks for your quick reply,
    Mike

  • Check the bex queries results for creating dashboards?

    Hi All,
How can I check the BEx query results for creating dashboards? Is it possible to view the BEx query results before creating the dashboards?
Please help me with the steps.
Thanks in advance.

Yes, you can check the query result first on the SAP BW server using transaction RSRT with your query name, where you can do a complete debug and check the query result.
Create a query, drag and drop the dimensions and key figures into the window, and click the preview window to check the query result at the bottom of the window.
Revert for any assistance.

  • Crystal Report is very slow to fetch the results of a Query used in Command

    Hello All,
We have a query which is used in a command object in Business Objects Crystal Reports XI. The report is executed against an IBM DB2 database and it takes 40 minutes to get back the results. Here is the query. Please suggest how I can improve the performance of this query in the report, as another reporting tool fetches the results of the same query in 2-3 minutes.
SELECT
    A1.CARR_NUM, A1.CST_CTR_NUM, A1.CLM_CLOSE_STS_CD, A1.CLM_RJCT_STS_CD, A1.CLM_VOID_STS_CD,
    A1.PMT_MTCH_DT, A1.ASC_TRANS_TYP_CD, A1.DRUG_DESC, A1.BATCH_NUM, T5.TPA_RJCT_CD,
    A1.DIV_NUM, A1.INVC_NUM, A1.SERVICE_DT, A1.ACTL_REFILL_NUM, A1.CLM_DSPSN_CD,
    A1.INVC_RSBMT_CD, A1.TP_TOTAL_DUE_AMT, A1.INTL_BILG_MTHD_CD, A1.CARR_EOB_1_CD,
    T5.CARR_EOB_CD_DESC, A1.NTNL_DRUG_CD, A1.PTNT_FRST_NM, A1.PTNT_MDDL_INTL,
    A1.PTNT_LST_NM, A1.PTNT_BIRTH_DT, A1.CRDHLDR_ID_NUM
FROM
    (SELECT DISTINCT
         T1.CARR_NUM, T1.CST_CTR_NUM, T1.CLM_CLOSE_STS_CD, T1.CLM_RJCT_STS_CD, T1.CLM_VOID_STS_CD,
         T3.PMT_MTCH_DT, T3.ASC_TRANS_TYP_CD, T2.DRUG_DESC, T4.BATCH_NUM, T1.DIV_NUM,
         T1.INVC_NUM, T1.SERVICE_DT, T1.ACTL_REFILL_NUM, T1.CLM_DSPSN_CD, T1.INVC_RSBMT_CD,
         T1.TP_TOTAL_DUE_AMT, T1.INTL_BILG_MTHD_CD, T3.CARR_EOB_1_CD, T1.NTNL_DRUG_CD,
         T2.PTNT_FRST_NM, T2.PTNT_MDDL_INTL, T2.PTNT_LST_NM, T2.PTNT_BIRTH_DT, T2.CRDHLDR_ID_NUM
     FROM
         PDRD2LO.VINVC_CLM T1,
         PDRD2LO.VPMNT_DTL T3,
         PDRD2LO.VINVC_CLM_2 T2,
         PDRD2LO.VINVC_ITM_PHARM T4
     WHERE
             T1.CLM_CLOSE_STS_CD = 'OP'
         AND T1.CLM_RJCT_STS_CD = 'RJ'
         AND T1.CLM_VOID_STS_CD = 'NV'
         AND T3.ASC_TRANS_TYP_CD = '40'
         AND T1.CARR_NUM = T3.CARR_NUM
         AND T1.CST_CTR_NUM = T3.CST_CTR_NUM
         AND T1.DIV_NUM = T3.DIV_NUM
         AND T1.INVC_NUM = T3.INVC_NUM
         AND T1.SERVICE_DT = T3.SERVICE_DT
         AND T1.CLM_CTRL_NUM = T3.CLM_CTRL_NUM
         AND T1.ACTL_REFILL_NUM = T3.ACTL_REFILL_NUM
         AND T1.INVC_ITM_NUM = T3.INVC_ITM_NUM
         AND T1.REFILL_TRANS_NUM = T3.REFILL_TRANS_NUM
         AND T1.CARR_NUM = T2.CARR_NUM
         AND T1.CST_CTR_NUM = T2.CST_CTR_NUM
         AND T1.DIV_NUM = T2.DIV_NUM
         AND T1.INVC_NUM = T2.INVC_NUM
         AND T1.SERVICE_DT = T2.SERVICE_DT
         AND T1.ACTL_REFILL_NUM = T2.ACTL_REFILL_NUM
         AND T1.CLM_CTRL_NUM = T2.CLM_CTRL_NUM
         AND T1.INVC_ITM_NUM = T2.INVC_ITM_NUM
         AND T1.REFILL_TRANS_NUM = T2.REFILL_TRANS_NUM
         AND T1.INVC_CTRL_NUM = T4.INVC_CTRL_NUM) A1
LEFT OUTER JOIN PDRD2LO.VCARR_EOB T5
    ON  A1.CARR_EOB_1_CD = T5.CARR_EOB_CD
    AND A1.CARR_NUM = T5.CARR_NUM
    AND A1.DIV_NUM = T5.DIV_NUM
    Thanks!

Hi Zilla Eh,
Thanks for the reply.
The other reporting tool is Impromptu and it takes 2-3 minutes, whereas Crystal Reports takes around 40 minutes. Please let me know if you have any other questions.
    Regards
    Ravi

  • How to ensure that the query results are fetched from the BW acclerator.

    Hi Experts,
If I want to execute a query with the BIA option, I can do it in RSRT.
1) But is there some query setting or property that ensures that whenever a query (say Q1) is executed, the data is always fetched from the BIA index?
2) Suppose BIA indexing has failed for some reason, will the data be fetched from the previously updated BIA index?
    regards
    akshay

    Hi man,
    1)But is there some query settings or properties where we can make sure that whenever a query(say Q1) is executed, always the data should be fetched from the BIA index.
If you have built a BIA index on a cube, then every query on that cube or MultiProvider will automatically use the BIA index. No additional settings are required. You can use the following way to test whether a query is using BIA or not:
Go to RSRT, enter the query name, click 'Execute and Debug' and check 'Display Statistics Data' and 'Do Not Use Cache'. Execute. After the query result, press F3; now you should be able to see the statistics. There is a tab 'BIA Access'. If you see something there, it means the BIA index was used.
    2)Suppose the BIA indexing has failed due to some reasons, will the data be fetched from the previously updated BIA index??
If the BIA rollup failed, then the last request(s) would not be available in reporting, and the query will use the BIA index for the earlier requests. You can think of it like an aggregate. But if the initial fill of the BIA failed, then you would not be able to use the BIA index.
    Regards,
    Frank
    Edited by: Frank Lee on Jul 14, 2009 2:22 PM

  • Script for Save Options- Save Query Results with document

    Hi,
Is there any JavaScript for the 'Save query results with document' setting under Save Options? If so, where can I find it?
    Thanks
    Kalai

Hi guys,
I found the answer. The following example shows you how to save the results with the query
section named “SalesQuery”:
    ActiveDocument.Sections["SalesQuery"].SaveResults=true

  • Query Results not coming for SELECT * in sql developer 3.1.07

SQL Developer is VERY FRUSTRATING to use.
In SQL Developer 3.1.07:
WHAT WORKS: select sequence# from v$archived_log
WHAT DOESN'T WORK: select * from v$archived_log
You've got to be kidding me.
I wish I could show a screenshot, but can't in this forum. Here's my description:
When I run the query, the bottom Query Results frame just says: Executing: select * from v$archived_log in 0 seconds.
Then there's no result grid.
Wow

    >
    This was when connected to a standby db that's in mount state
    >
    Is this a 'manual' standby db? Are you using Data Guard?
Post the results of running those queries in SQL*Plus, as Gary asked. If those queries give the same result, then this is NOT a SQL Developer issue.
    Review the following thread and post back with what information in it applies to your use case.
    Re: V$ARCHIVED_LOG returns no rows on manual standby
    If you are using Data Guard you may want to cross-post in that forum.

  • Different results for same query on different servers

    1. database being converted from 7.3.4 to 9.2
    2. query is executed on both servers
    3. different results!
The tables and data are the same, and indexes are set up on the new server. EXPLAIN PLAN was run on both, but with (predictably) widely different results.
I know, not much detail here, but has anyone ever run into anything like this?
    Thanks,
    Pat

    Folks,
    Here is the relevant information for this problem. The attributes have been changed for confidentiality. If you need any other information let me know.
    Thanks,
    Pat
    Query:
    SELECT
    one,two,three,four,a.five,six,seven,
    a.eight,a.nine,
    to_char(ten,'99999.99'),to_char(eleven,'99999.99'),
    to_char(twelve,'99999.99'),to_char(thirteen,'99999.99'),
    fourteen,fifteen,sixteen,seventeen,eighteen,
    nineteen,twenty,twentyone
    FROM
    table1 a,
    table2 b
    WHERE
    a.five = b.five
    AND one = 'X'
    AND six = ' '
    AND three != 12345
    AND three IN
    (SELECT DISTINCT
    three
    FROM table3
    WHERE
    one = 'X'
    AND twentytwo = 'XYZ'
    AND twentythree != 0
    AND twentyfour != 0
    AND twentyfive = 'Y'
    AND (a.five IN
    (SELECT DISTINCT
    five
    FROM table1
    WHERE
    one = 'X'
    AND three IN (12345)
    Table1:
    one          varchar2(1)     not null     PK
    two          varchar2(20)     not null     PK
    three          number(5)     not null     PK
    four          number(2)     not null     PK
    five          number(6)     not null     PK
    six          varchar2(4)     not null     PK
    seven          number(4)     not null
    eight          varchar2(8)     not null
    nine          date          not null
    ten          number(7,2)
    eleven     number(7,2)
    twelve     number(7,2)
    thirteen     number(7,2)
    Table2:
    five          number(6)     not null     PK
    fourteen     varchar2(5)     not null
    fifteen     varchar2(5)     not null
    sixteen     varchar2(2)     not null
    seventeen     varchar2(35)     not null
    eighteen     varchar2(4)     not null
    nineteen     varchar2(2)
    twenty     varchar2(1)
    twentyone     number(6)
    Table3:
    one          varchar2(1)     not null     PK
    twentytwo     varchar2(12)     not null     PK
    two          varchar2(20)     not null     PK
    three          number(5)     not null     PK
    four          number(2)     not null     PK
    twentysix     number(2)     not null     PK
    twentyfive     varchar2(1)     not null
    nine          date          not null
    twentythree     number(13,8)     not null
    twentyfour     number(12,8)     not null
    twentyseven     varchar2(1)     not null
    eight          varchar2(8)     not null
    Explain - Oracle9i (default, with CBO):
    SELECT STATEMENT
    RECURSIVE EXECUTION SYS_LE_3_0
    RECURSIVE EXECUTION SYS_LE_3_1
    TEMP TABLE TRANSFORMATION
    HASH JOIN
    HASH JOIN
    HASH JOIN
    TABLE ACCESS FULL SYS_TEMP_0FD9D6603_AB874A
    TABLE ACCESS BY INDEX ROWID TABLE1
    BITMAP CONVERSION TO ROWIDS
    BITMAP AND
    BITMAP MERGE
    BITMAP KEY ITERATION
    TABLE ACCESS FULL SYS_TEMP_0FD9D6603_AB874A
    BITMAP CONVERSION FROM ROWIDS
    INDEX RANGE SCAN TABLE1INDEX03
    BITMAP MERGE
    BITMAP KEY ITERATION
    TABLE ACCESS FULL SYS_TEMP_0FD9D6602_AB874A
    BITMAP CONVERSION FROM ROWIDS
    INDEX RANGE SCAN TABLE1INDEX04
    TABLE ACCESS FULL SYS_TEMP_0FD9D6602_AB874A
    TABLE ACCESS FULL TABLE2
    Query results - Oracle9i (with CBO):
    X|A | 1| 0| 22| | 1|08-MAR-02|XYZ | | 24.00| | |ABCDE|FGHIJ|AB|ABCDEFG |ABC | | | |
    X|B | 2| 1| 22| | 7|04-DEC-01|XYZ | | 3.25| 8.00| |ABCDE|FGHIJ|AB|ABCDEFG |ABC | | | |
    X|C | 3| 1| 22| | 1|14-AUG-02|XYZ | | 10.35| | |ABCDE|FGHIJ|AB|ABCDEFG |ABC | | | |
    X|D | 4| 1| 22| | 6|11-JUL-02|XYZ | | .00| 13.90| |ABCDE|FGHIJ|AB|ABCDEFG |ABC | | | |
    X|E | 5| 5| 23| | 1|22-FEB-01|XYZ | | 211.80| | |ABCDE|EFGHI|AB|HIJKLMN |DEF | | | |
    X|E | 5| 5| 23| | 1|22-FEB-01|XYZ | | 211.80| | |ABCDE|EFGHI|AB|HIJKLMN |DEF | | | |
    X|E | 5| 5| 23| | 1|22-FEB-01|XYZ | | 211.80| | |ABCDE|EFGHI|AB|HIJKLMN |DEF | | | |
    X|E | 5| 5| 23| | 1|22-FEB-01|XYZ | | 211.80| | |ABCDE|EFGHI|AB|HIJKLMN |DEF | | | |
    X|E | 5| 5| 23| | 1|22-FEB-01|XYZ | | 211.80| | |ABCDE|EFGHI|AB|HIJKLMN |DEF | | | |
    (approximately 4550 rows returned)
    Explain - Oracle9i (ALTER SESSION SET OPTIMIZER_METHOD=RULE;):
    SELECT STATEMENT
    MERGE JOIN
    SORT JOIN
    NESTED LOOPS
    NESTED LOOPS
    VIEW VW_NSO_1
    SORT UNIQUE
    TABLE ACCESS BY INDEX ROWID TABLE3
    INDEX RANGE SCAN TABLE3INDEXPK
    TABLE ACCESS BY INDEX ROWID TABLE1
    INDEX RANGE SCAN TABLE1INDEX03
    TABLE ACCESS BY INDEX ROWID TABLE2
    INDEX UNIQUE SCAN TABLE2INDEXPK
    SORT JOIN
    VIEW VW_NSO_2
    SORT UNIQUE
    TABLE ACCESS BY INDEX ROWID TABLE1
    INDEX RANGE SCAN TABLE1INDEX03
    Explain - Oracle 7.3.4:
    SELECT STATEMENT
    MERGE JOIN
    SORT JOIN
    NESTED LOOPS
    NESTED LOOPS
    VIEW
    SORT UNIQUE
    TABLE ACCESS BY ROWID TABLE3
    INDEX RANGE SCAN TABLE3INDEX03
    TABLE ACCESS BY ROWID TABLE1
    INDEX RANGE SCAN TABLE1INDEX03
    TABLE ACCESS BY ROWID TABLE2
    INDEX UNIQUE SCAN TABLE2INDEXPK
    SORT JOIN
    VIEW
    SORT UNIQUE
    TABLE ACCESS BY ROWID TABLE1
    INDEX RANGE SCAN TABLE1INDEX03
    Query results - Oracle 7.3.4:
    X|A | 1| 0| 22| | 1|08-MAR-02|XYZ | | 24.00| | |ABCDE|FGHIJ|AB|ABCDEFG |ABC | | | |
    X|B | 2| 1| 22| | 7|04-DEC-01|XYZ | | 3.25| 8.00| |ABCDE|FGHIJ|AB|ABCDEFG |ABC | | | |
    X|C | 3| 1| 22| | 1|14-AUG-02|XYZ | | 10.35| | |ABCDE|FGHIJ|AB|ABCDEFG |ABC | | | |
    X|D | 4| 1| 22| | 6|11-JUL-02|XYZ | | .00| 13.90| |ABCDE|FGHIJ|AB|ABCDEFG |ABC | | | |
    X|E | 5| 5| 23| | 1|22-FEB-01|XYZ | | 211.80| | |ABCDE|EFGHI|AB|HIJKLMN |DEF | | | |
    (approximately 1150 rows returned)
    Indexes (used):
    TABLE1INDEX03     (three)
    TABLE1INDEX04     (five)
    TABLE2INDEXPK     (five)
    TABLE3INDEXPK     (one,twentytwo,two,three,four,twentysix)
    Discussion:
    Notice the repeating result for X,E,5,5,23,.... in the Oracle9i retrieval using CBO.
    The execution of this query returned approximately four times the number of rows as
    Oracle 7.3.4 using RBO. This is a function of how many "fours" there are in table3
    that have matching "threes and fours" in table1. If you put a DISTINCT clause at the
    very front of the query (SELECT DISTINCT one,two,three ....) then the result is
    accurate and the same as Oracle 7.3.4. In one execution of this query, 27,000 rows
    are expected to be returned, but the server returned over 1 million rows!
    When RBO was used on Oracle9i, the results were identical to the ones on Oracle 7.3.4.

  • RPD - Cannot obtain number of columns for the query result :Working with MS SQL 2012 schema

    Hi All,
    I have created my warehouse in MS SQL 2012.
For management purposes, I have created different schemas in the SQL database.
In the RPD Physical layer, when I view data, I get this error:
    [nQSError:16002] Cannot obtain number of columns for the query result.
    [nQSError:16001] ODBC error state : S0002 code : 208 message: [Microsoft][ODBC SQL Server Driver][SQL Server] Invalid object name 'tbl'..
    [nQSError:16001] ODBC error state : S0002 code : 208 message: [Microsoft][ODBC SQL Server Driver][SQL Server] Statements could not be prepared..
I have already browsed "OBIEE 11g Strange ODBC Driver Error with SQL Server : Total Business Intelligence" ... it did not help me.
    please help!!!

    Hi All,
    After all R&D it is been found that Oracle business administrator( RPD) needs default dbo schema. It doesn't accept custom schema for pulling data.
    If anybody have other views please share.!!
    Thank you
