Question about the Query Optimizer

For an exercise during my database lecture, the following table with about 1.5 million rows was given:
CREATE TABLE TASKS (
    "ID" NUMBER NOT NULL ENABLE,
    "START_DATE" DATE,
    "END_DATE" DATE,
    "DESCRIPTION" VARCHAR2(50 BYTE)
);
In addition, there was the following query:
SELECT START_DATE, COUNT(START_DATE) FROM TASKS
GROUP BY START_DATE
ORDER BY START_DATE;
And the index:
create index blub on Tasks (start_date asc);
The main exercise was to speed up queries with indexes. Because all data is accessed, the optimizer ignores the index and just does a full table scan.
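For reference, plans like the ones shown below can be generated, for example, with EXPLAIN PLAN and DBMS_XPLAN:
EXPLAIN PLAN FOR
SELECT START_DATE, COUNT(START_DATE) FROM TASKS
GROUP BY START_DATE
ORDER BY START_DATE;

SELECT * FROM TABLE(dbms_xplan.display);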
Here is the QEP:
----------------------------------------------------------------------------
| Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |  9343 | 74744 |  3423   (6)| 00:00:42 |
|   1 |  SORT GROUP BY     |       |  9343 | 74744 |  3423   (6)| 00:00:42 |
|   2 |   TABLE ACCESS FULL| TASKS |  1981K|    15M|  3276   (2)| 00:00:40 |
----------------------------------------------------------------------------
Then we tried to force it to use the index with this query:
ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS_1;
SELECT /*+ INDEX(TASKS BLUB) */ START_DATE, COUNT(START_DATE) FROM TASKS
GROUP BY START_DATE
ORDER BY START_DATE;
but again it ignored the index. The optimizer guide clearly states that whenever all data of a table is accessed, it has to do a full scan.
So we tricked it into doing a fast index scan with this query:
create or replace function bla
return date deterministic is
  ret date;
begin
  select MIN(start_date) into ret from Tasks;
  return ret;
end bla;
/
ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS_1;
SELECT /*+ INDEX(TASKS BLUB) */ START_DATE, COUNT(START_DATE) FROM TASKS
where start_date >= bla
GROUP BY START_DATE
ORDER BY START_DATE;
Now we got the following QEP:
-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |     1 |     8 |     3   (0)| 00:00:01 |
|   1 |  SORT GROUP BY NOSORT|      |     1 |     8 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN   | BLUB |     1 |     8 |     3   (0)| 00:00:01 |
-----------------------------------------------------------------------------
So it does use the index.
Now to my two questions:
1. Why should it always do a full scan? (The answer in the optimizer documentation is a little unsatisfying.)
2. Given the difference between the costs (FS: 3276, IR: 3) and the elapsed time (FS: 9.6 s, IR: 4.45 s), why does the optimizer refuse the clearly better plan?
Thanks in advance,
Kai Gödde

John Spencer already mentioned the most important point (the role of NULL values): since START_DATE is nullable and a B*Tree index stores no entry for rows in which all indexed columns are NULL, the index alone cannot return the NULL group, so the optimizer cannot use it for the unrestricted query. But here are some minor additions:
* with the additional NOT NULL condition, Oracle will do an INDEX FAST FULL SCAN (a multiblock scan of the index segment, similar to a FULL TABLE SCAN)
* with the additional NOT NULL condition you don't need a hint; Oracle will choose the IFFS access on its own
* if the index were a bitmap index, you would not need a NOT NULL condition, because Oracle stores NULL values in bitmap indexes (but you should not define bitmap indexes in an OLTP system, because they bring massive locking issues)
* if the index contained a second, never-NULL value, the additional NOT NULL condition would also be needless
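One more alternative, not demonstrated below: if the column never actually contains NULL values, it can simply be declared NOT NULL, after which the optimizer can use the plain B*Tree index without any extra predicate. A minimal sketch (note that this would fail on the demo data below, where every fifth START_DATE is NULL):
-- only possible if the column contains no NULL values
ALTER TABLE tasks MODIFY (start_date NOT NULL);
Now to the demo setup: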
CREATE TABLE TASKS (
    "ID" NUMBER NOT NULL ENABLE,
    "START_DATE" DATE,
    "END_DATE" DATE,
    "DESCRIPTION" VARCHAR2(50 BYTE)
);
-- 1.5 million rows; every fifth START_DATE and every fourth END_DATE is NULL
insert into tasks
select rownum
     , case when mod(rownum, 5) = 0 then null else sysdate - round(rownum/100) end
     , case when mod(rownum, 4) = 0 then null else sysdate - round(rownum/50) end
     , lpad('*', 50 , '*')
  from dual
connect by level <= 1500000;
commit;
exec dbms_stats.gather_table_stats(user, 'TASKS')
-- the IFFS access without a hint:
create index blub on Tasks (start_date);
SELECT START_DATE
     , COUNT(START_DATE)
  FROM TASKS
WHERE START_DATE IS NOT NULL
GROUP BY START_DATE
ORDER BY START_DATE;
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT      |      | 15001 |   102K|  1871  (16)| 00:00:10 |
|   1 |  SORT GROUP BY        |      | 15001 |   102K|  1871  (16)| 00:00:10 |
|*  2 |   INDEX FAST FULL SCAN| BLUB |  1200K|  8203K|  1631   (3)| 00:00:09 |
Statistics
          0  recursive calls
          0  db block gets
       3198  consistent gets
          0  physical reads
          0  redo size
     378627  bytes sent via SQL*Net to client
      11520  bytes received via SQL*Net from client
       1002  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      15001  rows processed
-- the bitmap index
drop index blub;
create bitmap index blub on tasks(START_DATE);
SELECT START_DATE
     , COUNT(START_DATE)
  FROM TASKS
GROUP BY START_DATE
ORDER BY START_DATE;
| Id  | Operation                    | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT             |      | 15001 |   102K|   132   (1)| 00:00:01 |
|   1 |  SORT GROUP BY NOSORT        |      | 15001 |   102K|   132   (1)| 00:00:01 |
|   2 |   BITMAP CONVERSION TO ROWIDS|      |  1500K|    10M|   132   (1)| 00:00:01 |
|   3 |    BITMAP INDEX FULL SCAN    | BLUB |       |       |            |          |
Statistics
          1  recursive calls
          0  db block gets
       1126  consistent gets
          0  physical reads
          0  redo size
     378682  bytes sent via SQL*Net to client
      11520  bytes received via SQL*Net from client
       1002  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      15002  rows processed
-- the composite index (the constant second column is never NULL, so every row is indexed)
drop index blub;
create index blub on tasks(START_DATE, 0);
SELECT START_DATE
     , COUNT(START_DATE)
  FROM TASKS
GROUP BY START_DATE
ORDER BY START_DATE;
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT      |      | 15001 |   102K|  2400  (15)| 00:00:12 |
|   1 |  SORT GROUP BY        |      | 15001 |   102K|  2400  (15)| 00:00:12 |
|   2 |   INDEX FAST FULL SCAN| BLUB |  1500K|    10M|  2095   (3)| 00:00:11 |
Statistics
          0  recursive calls
          0  db block gets
       4120  consistent gets
          0  physical reads
          0  redo size
     378682  bytes sent via SQL*Net to client
      11520  bytes received via SQL*Net from client
       1002  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      15002  rows processed
Regards
Martin Preiss

Similar Messages

  • Some questions about the query designer

    hello, dear all,
    I am a newcomer here, and I am interested in BI, but I have no basic knowledge about it,
    so I just want someone to give me some advice about it.
    My boss needs me to do development with the Query Designer. I have searched this forum but found nothing, as I am a newcomer here.
    I heard there are some training documents for the Query Designer; could someone give me the URL? Thanks.

    Hi,
    Query Designer is used to develop a query. A query can be created on the following data targets:
    - InfoCube
    - DSO
    - Virtual data target
    - MultiCube
    - InfoSet
    - MultiProvider
    We have five sections in Query Designer:
    - InfoProvider: where we select the data target on which the report is to be created
    - Filter: to restrict values at the InfoProvider level (if you want data for year 2008, for example)
    - Free Characteristics: these allow you to drill down
    - Columns: characteristics/key figures to be displayed in columns can be added here
    - Rows: key figures/characteristics to be displayed in rows can be added here
    Give me your mail ID and I will send you the BEx manual.
    I would suggest, if you have any IDES (training) system where you can log in, go to RRMX,
    try to create a new query, add any data target (which is already created), then drag and drop the characteristics/key figures to the required sections,
    save it and execute it.
    If you play around and look at the output, that will help you understand how to work with Query Designer.
    Hope this helps
    Sukhi

  • Two questions about the query and LSMW

    Hi Experts ,
    Could you tell me how to download a query to a local drive? Is it possible to do that?
    I have heard somebody mention the t-code LSMW. Can it transfer data from a non-SAP/R3 system? Could somebody fully explain this t-code?
    Many thanks !!!
    Best Regards,
    Carlos Z

    Hi,
          LSMW – Step by Step Guide: The Legacy System Migration Workbench is an R/3-based tool for data transfer from legacy systems to R/3, either one-time or periodic.
    The basic technique is: import data from a spreadsheet / sequential file, convert it from the source format to the target format, and import it into the R/3 database. LSMW is not part of standard R/3; if you need this product, email [email protected]
    Advantages of LSMW:
        • Most of the functions are within R/3, hence platform independence.
       • Quality and data consistency due to standard import techniques.
       • Data mapping and conversion rules are reusable across projects.
       • A variety of technical possibilities of data conversion.
       • Generation of the conversion program on the basis of defined rules
       • Interface for data in spreadsheet format.
       • Creation of data migration objects on the basis of recorded transactions.
       • Charge-free for SAP customers and partners.
    Working With LSMW:
    Use TCODE LSMW
    Objects of LSMW:
      •Project   – ID with max of 10 char to Name the data transfer project.
      • Subproject   – Used as further structuring attribute.
      • Object   – ID with max of 10 Characters, to name the Business object .
      • Project can have multiple sub projects and subprojects can have multiple objects.
      • Project documentation displays any documentation maintained for individual pop ups and processing steps
    User Guide: Clicking on Enter leads to interactive user guide which displays the Project name, sub project name and object to be created.
    Object type and import techniques:
      • Standard Batch / Direct input.
      • Batch Input Recording
          o If no standard programs available
          o To reduce number of target fields.
          o Only for fixed screen sequence.
        • BAPI
        • IDOC
          o Settings and preparations needed for each project
    Preparations for IDOC inbound processing:
        • Choose settings -> IDOC inbound processing in LSMW
        • Set up File port for file transfer, create port using WE21.
        • Additionally set up RFC port for submitting data packages directly to function module IDoc_Inbound_Asynchronous, without creating a file during data conversion.
        • Setup partner type (SAP recommended ‘US’) using WE44.
        • Maintain partner number using WE20.
        • Activate IDOC inbound processing.
        • Verify workflow customizing.
    Steps in creating LSMW Project:
        • Maintain attributes – choose the import method.
        • Maintain source structure/s with or without hierarchical relations. (Header, Detail)
        • Maintain source fields for the source structures. Possible field types – C, N, X, date, amount and packed field with decimal places.
        • Fields can be maintained individually or in table form or copy from other sources using upload from a text file
        • Maintain relationship between source and target structures.
        • Maintain Field mapping and conversion rules
        • For each Target field the following information is displayed:
          o Field description
          o Assigned source fields (if any)
          o Rule type (fixed value, translation etc.)
          o Coding.
          o Some fields are preset by the system & are marked with Default setting.
        • Maintain Fixed values, translations, user defined routines – Here reusable rules can be processed like assigning fixed values, translation definition etc.
        • Specify Files
          o Legacy data location on PC / application server
          o File for read data ( extension .lsm.read)
          o File for converted data (extension .lsm.conv)
        • Assign Files – to defined source structures
        • Read data – Can process all the data or part of data by specifying from / to transaction numbers.
        • Display read data – To verify the input data being read
        • Convert Data – Data conversion happens here, if data conversion program is not up to date, it gets regenerated automatically.
        • Display converted data – To verify the converted data
    Import Data – Based on the object type selected
        • Standard Batch input or Recording
          o Generate Batch input session
          o Run Batch input session
        • Standard Direct input session
          o Direct input program or direct input transaction is called
    BAPI / IDOC Technique:
        • IDOC creation
          o Information packages from the converted data are stored on R/3 Database.
          o system assigns a number to every IDOC.
          o The file of converted data is deleted.
        • IDOC processing
          o IDOCS created are posted to the corresponding application program.
          o Application program checks data and posts in the application database.
    Finally Transport LSMW Projects:
        • R/3 Transport system
          o Extras ->Create change request
          o Change request can be exported/imported using CTS
        • Export Project
          o Select / Deselect part / entire project & export to another R/3 system
        • Import Project
          o Exported mapping / rules can be imported through PC file
          o Existing Project data gets overwritten
          o Prevent overwriting by using
        ‘Import under different name
        • Presetting for Inbound IDOC processing not transportable.
    Regards

  • A question about the impact of SQL*PLUS SERVEROUTPUT option on v$sql

    Hello everybody,
    SQL> SELECT * FROM v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0  Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    SQL>
    OS : Fedora Core 17 (X86_64) Kernel 3.6.6-1.fc17.x86_64
    I would like to ask a question about the SQL*Plus SET SERVEROUTPUT ON/OFF option and its impact on queries on views such as v$sql and v$session. Here is the problem.
    Actually I define three variables in SQL*Plus in order to store sid, serial# and prev_sql_id columns from v$session in order to be able to use them later, several times in different other queries, while I'm still working in the current session.
    So, here is how I proceed
    SET SERVEROUTPUT ON;  -- I often activate this option as the first line of almost all of my SQL-PL/SQL script files
    SET SQLBLANKLINES ON;
    VARIABLE mysid NUMBER
    VARIABLE myserial# NUMBER;
    VARIABLE saved_sql_id VARCHAR2(13);
    -- So first I store sid and serial# for the current session
    BEGIN
        SELECT sid, serial# INTO :mysid, :myserial#
        FROM v$session
        WHERE audsid = SYS_CONTEXT('UserEnv', 'SessionId');
    END;
    /
    PL/SQL procedure successfully completed.
    -- Just check to see the result
    SQL> SELECT :mysid, :myserial# FROM DUAL;
        :MYSID :MYSERIAL#
           129   1067
    SQL>
    Now, let's say that I want to run the following query as the last SQL statement executed within my current session:
    SELECT * FROM employees WHERE salary >= 2800 AND ROWNUM <= 10;
    According to the Oracle® Database Reference 11g Release 2 (11.2) description for v$session
    http://docs.oracle.com/cd/E11882_01/server.112/e25513/dynviews_3016.htm#REFRN30223
    the column prev_sql_id contains the sql_id of the last SQL statement executed for the given sid and serial#, which in the case of my example will be the above-mentioned SELECT query on the employees table. As a result, right after the SELECT statement on the employees table I run the following:
    BEGIN
        SELECT prev_sql_id INTO :saved_sql_id
        FROM v$session
        WHERE sid = :mysid AND serial# = :myserial#;
    END;
    /
    PL/SQL procedure successfully completed.
    SQL> SELECT :saved_sql_id FROM DUAL;
    :SAVED_SQL_ID
    9babjv8yq8ru3
    SQL>
    Having the value of sql_id, I'm supposed to find all information about the cursor(s) for my SELECT statement, including its sql_text value, in v$sql. Yet here is what I get when I query v$sql with the stored sql_id:
    SELECT child_number, sql_id, sql_text
    FROM v$sql
    WHERE sql_id = :saved_sql_id;
    CHILD_NUMBER   SQL_ID          SQL_TEXT
    0              9babjv8yq8ru3    BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
    Therefore, instead of
    SELECT * FROM employees WHERE salary >= 2800 AND ROWNUM <= 10;
    for the value of sql_text I get the following:
    BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
    which is of course not what I was expecting to find in v$sql for the given sql_id.
    After a bit googling I found the following thread on the OTN forum where it had been suggested (well I think maybe not exactly for the same problem) to turn off SERVEROUTPUT.
    Problem with dbms_xplan.display_cursor
    This was precisely what I did
    SET SERVEROUTPUT OFF
    After that I repeated the whole procedure, and this time everything worked as expected. I checked the SQL*Plus documentation for SERVEROUTPUT
    and also the v$session page, yet I didn't find anything indicating that SERVEROUTPUT should be switched off whenever views such as v$sql and v$session
    are queried. I don't really understand how the one can have an impact on the other, or rather, why there is an impact at all.
    Could anyone kindly make some clarification?
    thanks in advance,
    Regards,
    Dariyoosh

    >
    and also the v$session page, yet I didn't find anything indicating that SERVEROUTPUT should be switched off whenever views such as v$sql and v$session
    are queried. I don't really understand how the one can have an impact on the other, or rather, why there is an impact at all.
    Hi Dariyoosh,
    SET SERVEROUTPUT ON has the effect of executing dbms_output.get_lines after each and every statement, and is not related only to system views.
    Here is what Tom Kyte explains about this:
    Now, sqlplus sees this functionality and says "hey, would not it be nice for me to dump this buffer to screen for the user?". So, they added the SQLPlus command "set serveroutput on" which does two things
    1) it tells SQLPLUS you would like it to execute dbms_output.get_lines after each and every statement. You would like it to do this network round trip after each call. You would like this extra overhead to take place (think of an install script with hundreds/thousands of statements to be executed -- perhaps, just perhaps, you don't want this extra call after every call)
    2) SQLPLUS automatically calls the dbms_output API "enable" to turn on the buffering that happens in the package.
    Regards.
    Al
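    To see the effect for yourself, here is a minimal sketch (assuming SQL*Plus and the employees query from above): with SERVEROUTPUT ON, the session's previous statement is the hidden DBMS_OUTPUT.GET_LINES call rather than your own query.
    SET SERVEROUTPUT ON
    SELECT * FROM employees WHERE salary >= 2800 AND ROWNUM <= 10;
    -- v$session.prev_sql_id now points at DBMS_OUTPUT.GET_LINES, not at the SELECT
    SET SERVEROUTPUT OFF
    SELECT * FROM employees WHERE salary >= 2800 AND ROWNUM <= 10;
    -- v$session.prev_sql_id now points at the SELECT itself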

  • Two questions about the new iWeb

    Hi
    I've got two questions about the new iWeb.
    1. Is it possible to blog online now? Meaning adding a new blog entry without having to be on your own Mac? This is a feature I've been waiting for since iWeb first was released.
    2. Is it possible to choose the format of images? Earlier editions had the option to "optimize" images but that meant it changed it into .png meaning the site got a lot heavier than if .jpg was used. And since I have relatives who still only have an isdn connection I need to be able to have the website as light as possible.
    thanks

    Ah well... thanks for the quick answer

  • I want to question about the official service at the service center of Sony.

    I want to ask about the official service at the Sony service center.
    I have liked Sony models and products for a long time; from the PlayStation to cameras and camcorders, I have owned them all. The camera I bought two years ago is a Sony Cybershot DSC-H200 compact camera. A month ago it started having problems: the lens would not close, and the automatic mode setting changed by itself. I brought it to the Sony Service Center in Makassar, at Jl. Shop Pengayomann A5/05, (0411) 442340.
    The operator initially said it would take only two weeks to work on my camera. But it has now been longer than that; tomorrow, July 9, it will be going on a month, with no news from the service center. I kept calling the office and got assorted reasons: there are no spare parts, or there are technical constraints. The last time I called, they said the factory spare part was damaged, imported directly from Singapore. How can a new spare part be damaged before it is even used? What is the quality of this Sony spare part? Is it bad? Why?
    I am disappointed with this situation; Eid is coming soon and I want to travel home to Java, but the camera repair has still not been completed.
    So, roughly, what is Sony's solution to this problem? Please help, because I do not know whom to complain to. The operator just said: it's up to you, sir.
    Once again I ask for help and a solution, if possible before Eid arrives.
    Thank you,
    AD. Rusmianto

    Hi awwee107, 
    Welcome to the Sony Community! 
    We have forwarded your query to the relevant team for their further assistance and someone from local CC will contact you.
    Thanks!
     

  • A question about the SPOOL command in sqlplus

    Dear all,
    I have a question about the SPOOL Command and I would appreciate if you could kindly give me a hand. Consider the following sql script.
    SPOOL result.txt
    SELECT * FROM mytable;
    SPOOL OFF;
    This works pretty well, and the whole content of the table "mytable" is exported to the text file "result.txt". However, SQL*Plus also prints the number of lines
    returned after each query. As a result, after running this script, at the end of the file I always have a line like
    "20541 lines returned". How can I avoid this line (the number of returned lines) in my result file?
    Thanks in advance,
    Dariyoosh

    Peter Gjelstrup wrote:
    Hi Dariyoosh,
    As you are about to find out, SQL*Plus is a really powerful tool once the wonders of it are discovered.
    You really should study the reference
    http://download.oracle.com/docs/cd/E11882_01/server.112/e10823/toc.htm
    In your current case especially the SET command
    http://download.oracle.com/docs/cd/E11882_01/server.112/e10823/ch_twelve040.htm#BACGAJIC
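    For this particular case, the FEEDBACK setting is the relevant one; a minimal sketch:
    SET FEEDBACK OFF
    SPOOL result.txt
    SELECT * FROM mytable;
    SPOOL OFF
    SET FEEDBACK ON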
    Regards
    Peter

    Hello there,
    Thank you very much for your attention to my problem and in particular the interesting links.
    Kind Regards,
    Dariyoosh

  • SUP - 2 questions about the CDB (cache database)

    Hi,
    I have 2 questions about the cache database and the cache groups:
    1 - How exactly does the "On demand" cache group policy work? I know that an online cache group stores no data in the CDB and makes direct requests from the device to the backend; DCN is based on updates from the backend; scheduled is based on a time period. But I don't understand how "on demand" works exactly, and why it has a time period too.
    2 - Is it possible to query the cache database table to check the data that SUP has stored? How can I do this?
    Thank you!

    I posted a similar question in SUP Apps project not too long ago and  Paul Horan provided this useful reply:
    Create a "Sybase ASA v12.x for Unwired Server" connection profile in the Enterprise Explorer.  I named mine CDB.
    : Host = localhost (or whatever the machine name is)
    : Port = 5200
    : Database name = "default"
    : User Name = "dba"
    : Password = "sql"
    Obviously, change the userid/password to match, if you changed them during install time.
    Connect, and you'll see the "default" database displayed.
    Navigate down through the Tables folder, and the first subfolder is labeled something like [#should_delete_sk ...]  Start there.
    You'll see a bunch of tables with the naming convention "D1" + package name + package version + MBO name.  These are the cache tables for the MBOs.
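    Once connected, the cache tables can be queried with plain SQL, which answers question 2. A sketch, where the table name is purely hypothetical but follows the naming convention described above:
    -- hypothetical cache table for package MYPKG version 1.0, MBO Customer
    SELECT * FROM dba.d1_mypkg_1_0_customer;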

  • Question about the function module (RFC_READ_TABLE)

    Dear everyone
    Could I ask you a question about the function module (RFC_READ_TABLE)?
    I was asked if it's possible to create a report which compares the data between different SAP systems (both production systems).
    Now, the easiest way would be to use the function module (RFC_READ_TABLE) within a SAP infoset query (SQ01 type query).
    But I heard the rumor that using the function module (RFC_READ_TABLE) is not advisable due to the security reason.
    However, I am not exactly sure what sort of security problems this function module can possibly have...
    Would you help me on this?
    I also would like to know if using "remote enabled module" type function module can always overcome this possible security issue.
    Or, are there any points that I need to be careful about even when I use "remote enabled module" function module?
    Thank you very much in advance.
    Takashi

    Dear Fred-san
    Thank you very much for your support on this.
    But, may I double check about what you mentioned above?
    So, what you were mentioning was that if some user executes a query with
    the function module (RFC_READ_TABLE) under the following conditions, he can access
    the HR data even when he does not have the authorizations for HR transactions?
    <Conditions>
    1. the user has the authorization for HR database tables themselves
    2. RFC_READ_TABLE is called to retrieve the data from HR database
    <example>
    * read rows from the HR master data table PA0000 via RFC_READ_TABLE
    DATA: lf_hr_table LIKE dd02l-tabname VALUE 'PA0000',
          lt_options  TYPE STANDARD TABLE OF rfc_db_opt,
          lt_fields   TYPE STANDARD TABLE OF rfc_db_fld,
          lt_data     TYPE STANDARD TABLE OF tab512.
    CALL FUNCTION 'RFC_READ_TABLE'
      EXPORTING
        query_table = lf_hr_table
      TABLES
        options     = lt_options
        fields      = lt_fields
        data        = lt_data.
    But then, as long as we call this function module for non-critical tables such as
    VBAP (sales order items) or EKKO (purchase order headers) within our query, using
    RFC_READ_TABLE wouldn't seem to be such a security risk...
    Besides, each query (infoset query) has got the concept of user groups, which limits
    the access to the queries within the user group.
    ※If someone does not belong to the user group, he cannot execute the queries within that
       user group, etc.
    So, my feeling is that even infoset queries do have an authorization concept...
    Would you give me your thought on this?
    I also thank you for your information for SCU0.
    That is an interesting transaction
    Kind regards,
    Takashi

  • Re: Question about the Satellite P300-18Z

    Hello everyone,
    I have a couple of questions about the Satellite P300-18Z.
    What "video out" does this laptop have? (DVI, s-video or d-sub)
    Can I link the laptop up to a LCD-TV and watch movies on a resolution of 1080p? (full-HD)
    What is the warranty on this laptop?

    Hello
    According to the notebook specification, the Satellite P300-18Z has the following interfaces:
    DVI - No DVI port available
    HDMI - HDMI-out (HDMI out port available)
    Headphone Jack - External Headphone Jack (Stereo) available
    i.LINK - i.LINK (FireWire) port available
    Line in Jack - No Line In Jack available
    Line out Jack - No Line Out Jack available
    Microphone Jack - External Microphone Jack
    TV-out - port available (S-Video port)
    VGA - VGA (external monitor RGB port)
    You can also connect it to your LCD TV using an HDMI cable.
    Warranty is country-specific, so clarify this with your local dealer, but I know that all Toshiba products have a 1-year standard warranty and also a 1-year international warranty. You can of course extend it.

  • Questions about the Apple Developer Enterprise Program

    Hi there,
    i got some questions about the Apple Developer Enterprise Program:
    - is there a way a company can create its own "AppStore" with only the apps the employees should use?
    - when I have developed the enterprise app, are the install files on an Apple-hosted server, or do I need my own infrastructure to distribute my app?
    Thanks in advance for answers!

    Google: MDM

  • Some questions about the integration between BIEE and EBS

    Hi, dear,
    I'm a newbie to BIEE. In recent days I have had a look at the BIEE architecture and the BIEE components. The next project involves some BIEE development based on the EBS application. I have some questions about the integration:
    1) Generally, are the BIEE database and application server separate from the EBS database and application? Can both the BIEE 10g and 11g versions be integrated with EBS R12?
    2) In the BIEE Administration Tool, the first step is to create physical tables. If the source application is EBS, is it still necessary to create the physical tables?
    3) If the physical table creation is needed, how is the data transfer from the EBS source tables to the BIEE physical tables done? Which ETL tool do most developers prefer: Warehouse Builder or Oracle Data Integrator?
    4) During the data transfer phase, if a very large volume of data needs to be transferred, how is completeness maintained? For example, if 1 million rows need to be transferred from the source database to the BIEE physical tables and users open a BIEE report when 50% is completed, can they see the new 50% of the data in the reports? Is there some transaction control in the ETL phase?
    Could anyone give me some guidance? I would also appreciate any other information.
    Thanks in advance.

    1) Generally, are the BIEE database and application server separate from the EBS database and application? Can both the BIEE 10g and 11g versions be integrated with EBS R12?
    You should consider OBI Applications here, which uses OBIEE as a reporting tool with different pre-built modules. Both 10g and 11g come with different versions of BI Apps, which support sources like Siebel CRM, EBS, PeopleSoft, JD Edwards etc.
    2) In the BIEE Administration Tool, the first step is to create physical tables. If the source application is EBS, is it still necessary to create the physical tables?
    This is independent of any source. This is OBIEE modeling: you create the RPD with all its layers. If you build it from scratch you will need to create all the layers; if BI Apps is used you get a pre-built RPD along with the other pre-built components.
    3) If the physical table creation is needed, how is the data transfer from the EBS source tables to the BIEE physical tables done? Which ETL tool do most developers prefer: Warehouse Builder or Oracle Data Integrator?
    BI Apps comes with pre-built ETL mappings, mainly for use with Informatica. Only BI Apps 7.9.5.2 comes with ODI, but Oracle plans to use only ODI for further releases.
    4) During the data transfer phase, if a very large volume of data needs to be transferred, how is completeness maintained? Can users see the new 50% of the data in the reports while the load is running? Is there some transaction control in the ETL phase?
    Users will still see the old data, because it is good practice to turn on the cache and purge it after every load.
    Refer to http://www.oracle.com/us/solutions/ent-performance-bi/bi-applications-066544.html
    and many more docs on Google.
    Hope this helps

  • Question about the sensor... just got my 4s yesterday after screwing up my 3 with the latest version update. Every call I have been on has either changed to speaker, called another number, ended the call, or activated FaceTime, which I have turned off.

    Question about the sensor... I just got my 4s yesterday after screwing up my 3 with the latest version update. Every call I have been on has either changed to speaker, called another number, ended the call, or activated FaceTime, which I have turned off. I never had this trouble with my 3... I don't even want to talk to anyone on this phone! Is the sensor bad? That is what the AT&T rep suggested.

    Restore as new... if the problem still continues then there is a hardware issue.
    If it stops after a restore as new, then the issue is with the backup the device is currently setup with.

  • Question about the DB adapter

    Question about the DB adapter
    ns2006.0.7
    Question:
    It seems that we can only have one DB adapter, but in the adapter definition we have to specify the database. If I want to communicate with several different databases on different platforms (Oracle / SQL Server), I need to install a DB adapter for each database connection.
    I can't figure out how to add a second DB adapter; does anyone know how?
    Thank you
    Daniel
    Safeway Inc.

    Hi
    We're new at developing our own agents etc. We've leveraged the supplied DB agent and written the relevant adapter; however, we're struggling with the transformation.
    Could I be cheeky and ask if anyone would be willing to share a transformation they have written for the DB adapter? Unfortunately no one in our team has past XML experience, so we're trying to reverse-engineer from examples.
    Thanks, Meghan

  • Question about the ability of Genius Bar to help with outside software...

    Hi I wasn't sure where to put this, but here's my question. I'm having a problem with the Sims 2 on my Macbook and I was wondering if I go to the Genius Bar will they be able to help me or are they strictly hands off when it's not a direct Apple product?
    Also, I bought an external hard drive in the Apple Store but it's not an Apple branded product. If I'm having a problem with that, can I go to the Apple Store?

    If it's a question about the installation, system requirements or compatibility with your MacBook, the Mac Genius can probably help. If you're having some kind of issue while using the Sims 2, you'll probably have to contact .
    The same is true with the hard drive. If you need some help getting it connected or even formatted, they can probably help. However, if it's not powering up or you're losing files or some kind of operational issue with the hard drive after it's connected, they'll probably refer you to the manufacturer.
    Usually Apple can't support other vendors' products. Someone here may be able to give you some more advice, since none of us work for Apple. There may be someone here who's already had the same issue with the Sims.
    -Doug
