Query to perform like a MINUS statement

Hi Guys,
I have an issue to ask about. I need to extract the finally-issued parts from a table that stores both the issue and un-issue of parts.
E.g. let's say 4 items have been issued and 3 items have been un-issued; the result should be the remaining 1 item. When I execute the following code, it does not work correctly.
Table data:
REPAIRED_ITEM_ID    ISSUED_REMOVED_PART_ID    OPER_ID    QTY
122013187           1323938                   308        1
122013187           1323938                   308        1
122013187           1323938                   309        1
122013187           1323940                   308        1
122013187           1323940                   308        1
122013187           1323940                   309        1
122013187           1323940                   309        1
SELECT * FROM WC1.ISSUED_REMOVED_ITEM IRI
    WHERE
    IRI.ORDER_ITEM_OPER_ID = 308
    AND IRI.SES_CUSTOMER_ID =1
    AND IRI.REPAIRED_ITEM_ID =  122013187 
    AND 0 = (SELECT COUNT(1)
           FROM WC1.ISSUED_REMOVED_ITEM U
           WHERE U.REPAIRED_ITEM_ID =  IRI.REPAIRED_ITEM_ID
           AND U.ORDER_ITEM_OPER_ID = 309
           AND U.SES_CUSTOMER_ID= 1
           )
But the above query is not showing any result, while it should show 1 record of 1323938. Can anyone please explain how to get the right result?
Note: REPAIRED_ITEM_ID is unique to the MAIN UNIT, which can contain multiple issues and un-issues of dependent units.
Many Thanks,
M.C.
Edited by: BluShadow on 06-Sep-2012 13:19
added {noformat}{noformat} tags to help readability.  Please read {message:id=9360002} and learn to post questions with format in future.

Hi,
Mark Cooper wrote:
Eg. If a unit Serial no '354879019900009' has a part (1015268) issued 8 times and then unissued 4 times so finally the part was issued 4 times. so I need 4 rows to show for each qty 1 for that part and unit serial number. Please find script below.
All the letters have to be lower-case in {code} tags.
-- ITL Table
Create table ITL_TEST ( ...
Thanks for posting the CREATE TABLE and INSERT statements.
Don't forget to post the exact results you want from the given sample data, and which version of Oracle you're using (e.g. 11.2.0.2.0).
COMMIT;
CREATE TABLE ... AS is a DDL command.  DDL commands automatically COMMIT, so you don't need a separate COMMIT statement here.  (It's not doing any harm, of course; I just thought you'd like to know.)
If you need to reference individual rows of the issued_removed_item table, then you might want to change Stew's query to use the analytic SUM function, not the aggregate SUM.  Either way, compute issued_qty in a sub-query, and wait until you're in a super-query to test for issued_qty>0, so you don't have to repeat the DECODE expression.
I'm guessing you want something like this:

WITH  got_issued_qty  AS
(
    SELECT  repaired_item_id
    ,       issued_part_id
--  ,       ...                                -- any other columns you want
    ,       SUM ( DECODE ( oper_id
                         , 308, +issued_removed_quantity
                         , 309, -issued_removed_quantity
                         )
                ) OVER ( PARTITION BY  repaired_item_id
                       ,               issued_part_id
                       )        AS issued_qty
    FROM    issued_removed_item
    WHERE   repaired_item_id  IN (122013187)   -- Easy to add ids, if needed
)
, cntr  AS
(
    SELECT  LEVEL  AS n
    FROM    (
                SELECT  MAX (issued_qty)  AS max_issued_qty
                FROM    got_issued_qty
            )
    CONNECT BY  LEVEL <= max_issued_qty
)
SELECT     t.item_serial_no, t.item_bcn         -- or t.*, or whatever columns you want
,          q.repaired_item_id, q.issued_part_id -- or q.*, or whatever columns you want
FROM       itl_test        t
LEFT JOIN  got_issued_qty  q  ON  q.repaired_item_id  = t.item_id
                              AND q.issued_qty        > 0
LEFT JOIN  cntr            c  ON  c.n  <= q.issued_qty
ORDER BY   q.repaired_item_id
,          q.issued_part_id
;
Again, this is just a guess.  Until you post the results you want, all I can do is guess.  Guessing isn't the best way to solve problems.
I displayed only a few columns, just to make the output more readable.  Adding whatever columns you want later will be trivial.
The output I get is 74 rows, starting with:
ITEM_SERIAL_NO  ITEM_BCN     REPAIRED_ITEM_ID ISSUED_PART_ID
354879019900009 BCN141290167 122013187 1015268
354879019900009 BCN141290167 122013187 1015268
354879019900009 BCN141290167 122013187 1015268
354879019900009 BCN141290167 122013187 1015268
354879019900009 BCN141290167 122013187 1015268
354879019900009 BCN141290167 122013187 1015268
354879019900009 BCN141290167 122013187 1015268
354879019900009 BCN141290167 122013187 1015268 ...
For testing and debugging, you may want to change the columns that are displayed.
I don't know if 74 rows is right for this sample data.  I'm certain that it's too many for initial testing.  If you could devise some sample data such that the desired output was only 20 or 25 rows, that would be a lot easier to test and to understand.
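The sign-flip idea above can be experimented with outside Oracle. Below is a minimal sketch in Python with SQLite, using the sample data and column names from the original post (not the real WC1 table); it uses CASE instead of DECODE (SQLite has no DECODE), and the aggregate SUM with GROUP BY rather than the analytic SUM, so it shows only the net-quantity step:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE issued_removed_item (
    repaired_item_id INTEGER,
    issued_part_id   INTEGER,
    oper_id          INTEGER,   -- 308 = issue, 309 = un-issue
    qty              INTEGER
);
INSERT INTO issued_removed_item VALUES
    (122013187, 1323938, 308, 1),
    (122013187, 1323938, 308, 1),
    (122013187, 1323938, 309, 1),
    (122013187, 1323940, 308, 1),
    (122013187, 1323940, 308, 1),
    (122013187, 1323940, 309, 1),
    (122013187, 1323940, 309, 1);
""")

# Net quantity per part: +qty for issues (308), -qty for un-issues (309).
rows = conn.execute("""
    SELECT repaired_item_id,
           issued_part_id,
           SUM(CASE oper_id WHEN 308 THEN  qty
                            WHEN 309 THEN -qty END) AS issued_qty
    FROM   issued_removed_item
    GROUP  BY repaired_item_id, issued_part_id
    HAVING issued_qty > 0
""").fetchall()

print(rows)   # [(122013187, 1323938, 1)]
```

Part 1323938 nets to +1 and survives; part 1323940 nets to 0 and is filtered out, which matches the single record the original poster expects.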

Similar Messages

  • Question about MINUS statement

    Hi, just wondering if there is any way of using the MINUS statement and keeping any duplicates that are returned from the query, much the same way that UNION ALL works compared to UNION.
    Is this possible or I am better to use not exists in a subquery instead?
    thanks for your time,
    Ian

    NOT EXISTS probably doesn't keep duplicates either.
    I think you could do something like this:
    Message was edited by:
    alessandro.miami
    Processing ...
    select deptno,rowid
    from scott.emp
    Query finished, retrieving results...
      DEPTNO  
            20
            30
            30
            20
            30
            30
            10
            20
            10
            30
            20
            30
            20
            10
    14 row(s) retrieved
    Processing ...
    select deptno,rowid
    from scott.emp
    minus (
         select deptno,max(rowid)
         from scott.emp
         where deptno < 20
         group by deptno
    )
    Query finished, retrieving results...
      DEPTNO  
            10
            10
            20
            20
            20
            20
            20
            30
            30
            30
            30
            30
            30
    13 row(s) retrieved
    Bye Alessandro
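    To directly answer Ian's question: Oracle has no MINUS ALL, but you can emulate a multiset difference by numbering duplicate copies with ROW_NUMBER() before subtracting, so each copy becomes distinct. A minimal sketch in Python with SQLite (whose EXCEPT behaves like Oracle's MINUS), using made-up data rather than scott.emp:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (deptno INTEGER);
CREATE TABLE b (deptno INTEGER);
INSERT INTO a VALUES (10), (10), (20), (20), (20);
INSERT INTO b VALUES (10), (20);
""")

# Plain EXCEPT (Oracle MINUS) collapses duplicates first:
# {10, 20} minus {10, 20} leaves nothing.
plain = conn.execute(
    "SELECT deptno FROM a EXCEPT SELECT deptno FROM b"
).fetchall()

# Numbering each copy makes duplicate rows distinct, so EXCEPT removes
# only one matching copy per row of b: a multiset ("MINUS ALL") difference.
multiset = conn.execute("""
    SELECT deptno FROM (
        SELECT deptno,
               ROW_NUMBER() OVER (PARTITION BY deptno) AS copy_no
        FROM   a
        EXCEPT
        SELECT deptno,
               ROW_NUMBER() OVER (PARTITION BY deptno) AS copy_no
        FROM   b
    )
""").fetchall()

print(plain)                           # []
print(sorted(r[0] for r in multiset))  # [10, 20, 20]
```

    The same ROW_NUMBER() trick works in Oracle, which is essentially what Alessandro's max(rowid) approach approximates for a single removed copy.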

  • Hi, in my SQL query I applied a LIKE condition (LIKE '%TEST') but it is taking a long time

    Hi, in my SQL query I applied a LIKE condition (LIKE '%TEST') but it is taking a long time. I applied indexes also, but I'm still facing the same problem. My database has nearly 200,000 records.

    Hi Manikandan
    Is there a difference in performance between running the query in BEx and WebI?
    have you aggregates in place on the BEx side of things?
    When you say its taking too long to open the report, have you a variable screen coming up for the report and is that what is taking the time or is it the report execution.
    With regards
    Gill
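    Back to the actual question: a leading wildcard (LIKE '%TEST') defeats an ordinary B-tree index, because a B-tree is ordered by the front of the key and '%TEST' gives the index nothing to seek on. A common workaround is to index the reversed string, so a suffix search becomes a prefix search; in Oracle that would typically be done with a function-based index, but the underlying idea can be sketched in plain Python (made-up row values):

```python
import bisect

rows = ["UNIT_TEST", "PROD", "SMOKE_TEST", "TESTER", "ALPHA"]

# A B-tree index is essentially a sorted list of keys.  A prefix search
# (LIKE 'TEST%') can binary-search it; a suffix search ('%TEST') cannot.
# Indexing the REVERSED strings turns the suffix search into a prefix search.
index = sorted((s[::-1], s) for s in rows)

def ends_with(suffix):
    """Return rows ending in `suffix`, via a prefix scan of the reversed index."""
    key = suffix[::-1]
    lo = bisect.bisect_left(index, (key,))
    out = []
    for rev, orig in index[lo:]:
        if not rev.startswith(key):
            break                    # left the matching key range: stop scanning
        out.append(orig)
    return sorted(out)

print(ends_with("TEST"))             # ['SMOKE_TEST', 'UNIT_TEST']
```

    With only a plain index, the database has no choice but a full scan for '%TEST', which is why adding ordinary indexes did not help.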

  • Perform and Form statements

    Hello,
    can anyone give egs of using PERFORM and FORM statement. what do these statements do actually.
    thanks.

    See this sample for PERFORM ... USING...CHANGING
    DATA : c1 TYPE i, c2 TYPE i, res TYPE i.
    c1 = 1.
    c2 = 2.
    PERFORM sum USING c1 c2 CHANGING res.
    WRITE:/ res.
    *&      Form  sum
    *       text
    form sum using p_c1 p_c2 changing value(p_res).
    p_res = p_c1 + p_c2.
    endform. " sum
    Note the difference between the above and below perform.
    DATA : c1 TYPE i, c2 TYPE i, res TYPE i.
    c1 = 1.
    c2 = 2.
    data: subroutinename(3) VALUE 'SUM'.
    PERFORM (subroutinename) IN PROGRAM Y_JJTEST1 USING c1 c2 CHANGING res.
    WRITE:/ res.
    *&      Form  sum
    *       text
    form sum using p_c1 p_c2 changing value(p_res).
    p_res = p_c1 + p_c2.
    endform. " sum
    ANother sample for simple perform
    PERFORM HELP_ME.
    FORM HELP_ME.
    ENDFORM.
    ... TABLES itab1 itab2 ...
    TYPES: BEGIN OF ITAB_TYPE,
             TEXT(50),
             NUMBER TYPE I,
           END OF ITAB_TYPE.
    DATA:  ITAB TYPE STANDARD TABLE OF ITAB_TYPE WITH
                     NON-UNIQUE DEFAULT KEY INITIAL SIZE 100,
           BEGIN OF ITAB_LINE,
             TEXT(50),
             NUMBER TYPE I,
           END OF ITAB_LINE,
           STRUC like T005T.
    PERFORM DISPLAY TABLES ITAB
                    USING  STRUC.
    FORM DISPLAY TABLES PAR_ITAB STRUCTURE ITAB_LINE
                 USING  PAR      like      T005T.
      DATA: LOC_COMPARE LIKE PAR_ITAB-TEXT.
      WRITE: / PAR-LAND1, PAR-LANDX.
      LOOP AT PAR_ITAB WHERE TEXT = LOC_COMPARE.
      ENDLOOP.
    ENDFORM.
    Hope this helps.
    Reward points if this helps you.

  • Urgent query regarding performance

    hi
    i have one query regarding performance.iam using interactive reporting and workspace.
    i have all the linsence server,shared services,and Bi services and ui services and oracle9i which has metadata installed in one system(one server).data base which stores relationaldata(DB2) on another system.(i.e 2 systems in total).
    in order to increase performance i made some adjustments
    i installed hyperion BI server services, UI services,license server and shared services server such that all web applications (that used web sphere 5.1) such as shared services and UI services in server1(or computer1).and remaining linsence and bi server services in computer2 and i installed database(db2) on another computer3.(i.e 3 systems in total)
    my query : oracle 9i which has metadata where to install that in ( computer 1 or in computer 2 )
    i want to get best performance.where to install that oracle 9i which has metadata stored in it.
    for any queries please reply mail
    [email protected]
    9930120470

    You should know that executing a query is always slower the first time; after that, Oracle can cache the parsed plan and data blocks for further executions. But going from 3 minutes to 3 seconds, your original query must be really, really slow; most of the time caching only wins a few milliseconds. You should clearly rewrite your query.
    Things you should know to enhance your execution time: try to reduce the number of nested loops. Nested cursor loops multiply the work (the inner query runs once per outer row), which is really slow:
    for rec1 in (select a from b) loop
      for  rec2 in (select c from d) loop
      end loop;
    end loop;
    Anything like that is bad.
    Try to avoid Cartesian products by writing the best where clause possible.
    select a.a,
             b.b
    from  a,
            b
    where b.b > 1
    This is bad and slow: there is no join condition between a and b, so it is a Cartesian product.
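    The nested-loop anti-pattern above can usually be rewritten as a single query with a proper join condition, so the database matches rows once instead of re-running the inner query per outer row. A minimal sketch with hypothetical tables in SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE b (a INTEGER);                 -- outer table
CREATE TABLE d (c INTEGER, b_a INTEGER);    -- inner table, keyed to b
INSERT INTO b VALUES (1), (2);
INSERT INTO d VALUES (10, 1), (20, 1), (30, 2);
""")

# Anti-pattern: one inner query per outer row (N+1 queries).
slow = []
for (a,) in conn.execute("SELECT a FROM b"):
    for (c,) in conn.execute("SELECT c FROM d WHERE b_a = ?", (a,)):
        slow.append((a, c))

# Better: one query with an explicit join condition -- no Cartesian product.
fast = conn.execute("""
    SELECT b.a, d.c
    FROM   b
    JOIN   d ON d.b_a = b.a
    ORDER  BY b.a, d.c
""").fetchall()

assert sorted(slow) == fast   # same rows, but one round trip instead of N+1
print(fast)                   # [(1, 10), (1, 20), (2, 30)]
```

    The join returns the same rows, but the optimiser gets to pick the access path and the client pays one round trip instead of one per outer row.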

  • Trying to find replacement battery thats performs like the original

    I have an aluminum unibody 15" MacBook Pro, about a year and a half old. The battery isn't performing like it used to, and I'm looking to replace it. The only problem is that every review I read says the replacement battery is horrible compared to the original, even from the Apple website and OEM. Does anyone know where I can find a replacement battery that will perform like the original battery that came with it?

    I think that your best option is to get one from Apple even though it will be the most expensive.  I have seen no statements in regards to the quality of them, but I have no reason to believe any one might have better ones.
    I have seen that through Amazon one can order OEM batteries for Apple MBPs and at more favorable prices. They look 'real' and I suspect that they are gray market items.  Nevertheless Caveat Emptor.
    iFixit.com also sells MBP batteries, but I don't know if they have any in stock.
    I certainly would avoid anything offered on e-bay in the bargain category.
    If you do choose to go third party, note that you will need a #0 Triwing driver for the battery swap.
    Ciao.

  • SQl query to remove all dbms_output statement

    Hi
    Can u please tell me Single SQl query to remove all dbms_output statement from package and procedure
    Umesh

    >
    Can u please tell me Single SQl query to remove all
    dbms_output statement from package and procedure
    If you are comfortable with scripting languages like Perl, Python, Ruby etc., then removing lines having the dbms_output statements from your files should be a trivial matter.
    pratz
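    For example, a few lines of Python will strip simple one-line dbms_output calls from a script (hypothetical source text; calls spread over multiple lines, or embedded inside larger statements, would need a real PL/SQL parser):

```python
import re

source = """\
CREATE OR REPLACE PROCEDURE demo IS
BEGIN
  dbms_output.put_line('debug: starting');
  UPDATE t SET x = 1;
  DBMS_OUTPUT.PUT_LINE('debug: done');
END demo;
"""

# Drop any line whose first token is a dbms_output call (case-insensitive).
pattern = re.compile(r"^\s*dbms_output\s*\.", re.IGNORECASE)
cleaned = "".join(
    line for line in source.splitlines(keepends=True)
    if not pattern.match(line)
)

print(cleaned)
```

    A single SQL statement cannot do this safely; the source lives in USER_SOURCE line by line, so a script that edits the files (or spools the source, filters it, and recompiles) is the practical route.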

  • How do I make my old user account perform like the new one?

    I have a MacBook Pro 13" 2008 with 8 GB of RAM that has been stuttering in HD video lately (or maybe since I upgraded to Lion). I created another user and it doesn't seem to suffer any stuttering. Perhaps it doesn't have weird things running in the background like the 655 MB kernel task I see in Activity Monitor. My question is, how do I make my old user account perform like the new one? I've tried to turn off things running in the background, but it doesn't seem to do anything. Thanks!

    How to Transfer Everything from an Old iPad to New iPad
    http://osxdaily.com/2012/03/16/transfer-old-ipad-to-new-ipad/
    http://ipad.about.com/od/iPad_Guide/ss/How-To-Wipe-Your-iPad-And-Erase-Data.htm
     Cheers, Tom

  • Asset query execution performance after upgrade from 4.6C to ECC 6.0+EHP4

    Hi,guys
    I have encountered a weird problem with asset query execution performance after upgrading to ECC 6.0.
    Our client migrated their SAP system from 4.6C to ECC 6.0. We tested all transaction codes and the related standard reports and queries.
    Everything is working normally except this asset depreciation query report. It is created based on the ANLP, ANLZ, ANLA, ANLB, and ANLC tables; there is also some ABAP code for additional fields.
    This report took about 6 minutes to execute in the 4.6C system; however, it takes 25 minutes in ECC 6.0 with the same selection parameters.
    At first I tried to find some difference in table indexes or structure between 4.6C and ECC 6.0, but there is none.
    I am wondering why the other query reports run normally but only this report takes such a long execution time, even though we did not make any changes to it.
    your reply is very appreciated
    Regards
    Brian

    Thanks for your replies.
    I checked these notes; unfortunately, they differ from our situation.
    In our situation, all the standard asset reports and queries (SQ01) run normally except this one query report.
    I executed SE30 for this query (SQ01) in both 4.6C and ECC 6.0.
    I found a difference in the select sequence logic, even though it is the same query without any changes.
    I list it here for your reference.
    4.6C
    AQA0FI==========S2============
    Open Cursor ANLP                                    38,702  39,329,356  = 39,329,356      34.6     AQA0FI==========S2============   DB     Opens
    Fetch ANLP                                         292,177  30,378,351  = 30,378,351      26.7    26.7  AQA0FI==========S2============   DB     OpenS
    Select Single ANLC                                  15,012  19,965,172  = 19,965,172      17.5    17.5  AQA0FI==========S2============   DB     OpenS
    Select Single ANLA                                  13,721  11,754,305  = 11,754,305      10.3    10.3  AQA0FI==========S2============   DB     OpenS
    Select Single ANLZ                                   3,753   3,259,308  =  3,259,308       2.9     2.9  AQA0FI==========S2============   DB     OpenS
    Select Single ANLB                                   3,753   3,069,119  =  3,069,119       2.7     2.7  AQA0FI==========S2============   DB     OpenS
    ECC 6.0
    Perform FUNKTION_AUSFUEHREN     2     358,620,931          355
    Perform COMMAND_QSUB     1     358,620,062          68
    Call Func. RSAQ_SUBMIT_QUERY_REPORT     1     358,569,656          88
    Program AQIWFI==========S2============     2     358,558,488          1,350
    Select Single ANLA     160,306     75,576,052     =     75,576,052
    Open Cursor ANLP     71,136     42,096,314     =     42,096,314
    Select Single ANLC     71,134     38,799,393     =     38,799,393
    Select Single ANLB     61,888     26,007,721     =     26,007,721
    Select Single ANLZ     61,888     24,072,111     =     24,072,111
    Fetch ANLP     234,524     13,510,646     =     13,510,646
    Close Cursor ANLP     71,136     2,017,654     =     2,017,654
    We can see that 4.6C first opens the cursor on ANLP and fetches from ANLP, then selects ANLC, ANLA, ANLZ, ANLB.
    But ECC 6.0 first selects ANLA, then opens the cursor on ANLP, then selects ANLC, ANLB, ANLZ, and fetches from ANLP last.
    Probably this is the real reason why it runs so long in ECC 6.0.
    Are there any changes to the query selection logic (table join handling) in ECC 6.0?

  • How can I evaluate the count of a query I'd like to execute with a map...

    Hi. I have a problem with a query...
    I have created a query which I execute with a Map (I use the
    executeWithMap(Map) method). The problem is that sometimes this query
    returns a large result set. So, I would like to execute another query
    (called query_count) before executing the final query with the Map. If
    query_count returns a count < 200, I execute the final query. How can
    I do this? Here is an example I read in the documentation:
    Query query = pm.newQuery (Magazine.class, "price < 5");
    query.setResult ("count(this)");
    Long count = (Long) query.execute ();
    The problem in this example is that the query is not executed with a Map.
    So my question is: "How can we evaluate the count of a query we
    would like to execute with a map?" Thank you for any response.

    Hi John,
    You should be able to executeWithMap that query, too. Is that giving you
    problems?
    Note that there may be an easier solution. What do you do if there are more
    than 200 results? If, e.g., you just get the first N, then one option is to
    set the FetchBatchSize on the query to N (thus activating large result set
    support), and then call size () on the resulting Collection. This will
    issue a SELECT COUNT(*) to the database to determine the size automatically.
    Thanks,
    Greg
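    The count-then-fetch pattern itself is simple; here is a minimal sketch of the idea in Python with SQLite (hypothetical table and parameter names; in Kodo/JDO the count query would use setResult("count(this)") together with executeWithMap, as Greg describes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE magazine (title TEXT, price REAL);
INSERT INTO magazine VALUES ('A', 3.0), ('B', 4.5), ('C', 9.0);
""")

LIMIT = 200
params = {"max_price": 5}       # the "Map" of named parameters

# query_count: same WHERE clause, but COUNT(*) instead of the columns.
(count,) = conn.execute(
    "SELECT COUNT(*) FROM magazine WHERE price < :max_price", params
).fetchone()

if count < LIMIT:
    rows = conn.execute(
        "SELECT title FROM magazine WHERE price < :max_price ORDER BY title",
        params,
    ).fetchall()
else:
    rows = None                 # too many rows: refuse, or page instead

print(count, rows)              # 2 [('A',), ('B',)]
```

    Note Greg's alternative: if you only ever show the first N rows anyway, a batched/paged fetch avoids the extra COUNT round trip entirely.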

  • How to improve performance of insert statement

    Hi all,
    How can I improve the performance of an INSERT statement?
    I am inserting 1 lakh (100,000) records into a table and it takes around 20 minutes.
    Please help.
    Thanks in advance.
    Thanx In Advance.

    I tried :
    SQL> create table test as select * from dba_objects;
    Table created.
    SQL> delete from test;
    3635 rows deleted.
    SQL> commit;
    Commit complete.
    SQL> select count(*) from dba_extents where segment_name='TEST';
    COUNT(*)
    4
    SQL> insert /*+ APPEND */ into test select * from dba_objects;
    3635 rows created.
    SQL> commit;
    Commit complete.
    SQL> select count(*) from dba_extents where segment_name='TEST';
    COUNT(*)
    6
    Cheers, Bhupinder

  • Does 'For All Entries in itab' work exactly like 'Join' statement?

    Hi,
    I would like to know that if 'For All Entries in itab' work exactly like 'Join' statement?
    If yes, then when I use 'For All Entries in itab' and a 'Join' statement separately, with the same logical conditions for both, the number of records returned by the two methods is not the same. Ideally, they should both return the same number of records.
    Can somebody help?
    With regards.

    Hi,
    FOR ALL ENTRIES will not work in exactly the same way unless it satisfies some conditions; it has some prerequisites:
    the fields selected (and the WHERE clause) should cover the entire table key. Otherwise, duplicate rows are silently dropped from the result, because FOR ALL ENTRIES implies a DISTINCT; that is usually why its row count differs from the equivalent JOIN.
    Hope I am clear. Please revert back if you have any queries.
    Regards,
    Sunil Kumar Mutyala.
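    The row-count difference can be sketched outside ABAP: a join keeps one result row per matching pair, while FOR ALL ENTRIES ignores duplicate driver entries and returns an implicitly DISTINCT result. A toy illustration with made-up data:

```python
# Driver table (itab) and database table, as plain lists of tuples.
itab = [("A",), ("A",), ("B",)]              # note the duplicate driver entry
db   = [("A", 1), ("A", 2), ("B", 3)]

# JOIN semantics: every matching (driver row, db row) pair yields a result row.
join_result = [(k, v) for (k,) in itab for (dk, v) in db if dk == k]

# FOR ALL ENTRIES semantics (sketch): duplicate driver entries are ignored
# and the final result set is implicitly DISTINCT.
fae_result = sorted({(dk, v) for (k,) in set(itab) for (dk, v) in db if dk == k})

print(len(join_result))   # 5 (the duplicate "A" entry doubles its matches)
print(len(fae_result))    # 3
```

    So the two only agree when the driver table has no duplicates and the selected fields make every result row unique, e.g. by including the full table key.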

  • Increase Query Designer Performance?

    Hi together,
    Is it possible to increase the performance of the Query Designer? I have a query based on a MultiProvider with several InfoCubes. If I work for some minutes with the Query Designer, it becomes slower and slower. Then it takes from one up to ten seconds before a digit I typed appears in my formula. This is very awful! At least half of the time I work with the Query Designer, I have to wait for the tool to stop calculating something or displaying the hourglass. Very inefficient.
    Thanks for hints in advance!

    >
    Ricardo Rosa wrote:
    > Hi Timo,
    >
    > Usually this kind of issue should be solved with frontend patch upgrade, have you tried to reproduce this with latest FEP?
    >
    > Other suggestion which should help is to search for RSZ tables inconsistencies with report ANALYZE_RSZ_TABLES, this can search for inconsistencies which can decrease the performance in the query definition and also suggest a fix for that.
    >
    > Kind regards,
    >
    > Ricardo
    Hi Ricardo,
    Yes, I've already upgraded to the latest FEP version.
    But thank you for that report — it looks very helpful!
    >
    Arun Varadarajan wrote:
    > Shikha,
    > The question was regarding Query designer performance and not query performance...
    >
    > We faced a similar issue earlier - even opening the Query Designer took a huge amount of time, but on upgrading my system to 1 GB RAM most of the performance issues went away... check the RAM usage and CPU utilization in the task manager. This might make the Query Designer faster.
    Hi Arun,
    At this point there is no more hope for me then, as my system already has 2 GB of RAM and a dual-core CPU at 2.2 GHz.

  • HT1430 I do a hard reset of my iPhone every so often because I feel it improves performance, but Apple states 'Reset ONLY if the device is not responding'. Forget about my OCD on this, but is their any reason my Apple stresses the ONLY DO THIS if not resp

    I do a hard reset of my iPhone every so often because I feel it improves performance, but Apple states 'Reset ONLY if the device is not responding'. Forget about my OCD on this, but is there any reason Apple stresses the ONLY DO THIS if not responding?

    deggie wrote:
    Because it is more far-reaching than just turning the phone off and back on, it completely takes out anything in volatile memory, etc. I do reset mine if I notice problems such as no email coming in, text message issues, etc. but the same thing could probably be accomplished by just turning the phone off and on in your case.
    Thanks deggie, I think I get it now. Volatile memory (I had to look it up!) is memory that is lost on loss of power, so it's as you say... I'm not achieving anything on a normally functioning iPhone as on-off achieves the same result!... Thanks

  • How to deal with query that works like dir or ls

    I just can't figure out how to write a query that works like dir (Win32) or ls (Unix) with wildcard characters like '*' and '?', e.g. c:\> dir j*.t?t
    Could somebody please tell me?

    Here's some code for using a FileFilter for the listFiles() method of the File class:
    import java.io.File;
    import java.io.FileFilter;

    class MyFileFilter implements FileFilter {
        private final String regex;

        public MyFileFilter(String glob) {
            // Convert the DOS-style wildcard pattern to a regex:
            // escape '.' first, then '*' -> ".*" and '?' -> ".".
            this.regex = glob.replace(".", "\\.").replace("*", ".*").replace("?", ".");
        }

        public boolean accept(File file) {
            if (file.isDirectory()) {
                return false;
            }
            return file.getName().matches(regex);
        }
    }

    // Note: listFiles() is an instance method, so call it on a directory:
    File dir = new File("C:\\");
    File[] files = dir.listFiles(new MyFileFilter("j*.t?t"));The wildcard-to-regex conversion takes care of the pattern matching.
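    To isolate the wildcard matching itself: a DOS-style pattern can be translated to a regular expression and tested with String.matches(). A minimal, self-contained sketch (the class and helper names are my own, purely illustrative):

    ```java
    public class GlobMatch {
        // Translate a DOS-style wildcard into a regex:
        // escape '.' first, then map '*' -> ".*" and '?' -> ".".
        static String globToRegex(String glob) {
            return glob.replace(".", "\\.").replace("*", ".*").replace("?", ".");
        }

        static boolean matches(String glob, String name) {
            // String.matches() anchors the regex to the whole file name.
            return name.matches(globToRegex(glob));
        }

        public static void main(String[] args) {
            System.out.println(matches("j*.t?t", "java.txt"));   // true
            System.out.println(matches("j*.t?t", "junk.text"));  // false: "ex" is two chars
            System.out.println(matches("*.t?t", "notes.tst"));   // true
        }
    }
    ```

    The same matches() check can be dropped into the accept() method of a FileFilter.
    
    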
