Need clarification on LocalTransaction in adapters

I'm using WLI 2.1.
I want to know how to use LocalTransactions in an adapter by implementing
spi.AbstractLocalTransaction.
How and where should I call Connection.getLocalTransaction()? Can I call
it in InteractionImpl.execute()?
Does the WebLogic server handle transactions, or do we need to call
Connection.getLocalTransaction() and
LocalTransaction.begin(), etc. ourselves?
Also, how is the LocalTransaction associated with the Connection and
the ManagedConnection?
Thanks in advance
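For what it's worth, here is a minimal sketch of the SPI side in Java, assuming a hypothetical MyBackendConnection handle to your EIS (in the WLI ADK you would typically extend AbstractLocalTransaction rather than implement the interface from scratch):

    import javax.resource.ResourceException;
    import javax.resource.spi.LocalTransaction;

    // Hypothetical handle to the backend EIS; stands in for whatever
    // your adapter actually talks to.
    interface MyBackendConnection {
        void beginUnitOfWork();
        void commitUnitOfWork();
        void rollbackUnitOfWork();
    }

    // SPI-level local transaction, returned by ManagedConnection.getLocalTransaction().
    // The container (or the CCI connection handle) calls these methods; they
    // just delegate to the EIS connection they were created for.
    public class MyLocalTransaction implements LocalTransaction {

        private final MyBackendConnection eis;

        public MyLocalTransaction(MyBackendConnection eis) {
            this.eis = eis;
        }

        public void begin() throws ResourceException {
            eis.beginUnitOfWork();
        }

        public void commit() throws ResourceException {
            eis.commitUnitOfWork();
        }

        public void rollback() throws ResourceException {
            eis.rollbackUnitOfWork();
        }
    }

The association runs through the ManagedConnection: ManagedConnection.getLocalTransaction() returns the javax.resource.spi.LocalTransaction above for the server to drive, while the CCI Connection's getLocalTransaction() returns a javax.resource.cci.LocalTransaction that typically delegates to the same underlying ManagedConnection state. When the adapter declares local-transaction support and the caller already runs in a container-managed transaction, WebLogic calls begin()/commit() itself, so InteractionImpl.execute() should normally just do its work on the enlisted connection rather than demarcate the transaction.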

Suchitra,
Note 309955 has a complete explanation of the statistics - also go to table RSDDSTAT, where you can find the fields you need; search on those and you will get the data you need.
As for reports - check under technical content - you can find the queries there - you need not activate them - look in the Metadata Repository.
And as for dashboards for the same - there are a lot of templates available as part of the technical content for BI.
Arun
P.S. What version are you on? The technical content has been vastly improved in BI 7.0.
Hope it helps..

Similar Messages

  • Need Clarification On Internal tables in Start Routine

    Hi,
    I have an internal table IT_A; the requests populated into it get deleted.
    Now I need to populate requests into IT_A from another internal table, IT_B, which has a different structure.
    I have three fields in common in both internal tables - RNR, Timestamp and SID - though with different field names.
    I need clarification: if I move the contents of these three fields from IT_B to IT_A by specifying those fields, will the internal table IT_A still delete the requests?
    Thanks,
    Sriram.

    Hi Sriram,
    As mentioned, IT_A deletes the requests loaded into it.
    If you move the contents of the three fields from IT_B to IT_A after the deletion step for internal table IT_A takes place, they won't get deleted.
    But they would be deleted if the contents are moved before the deletion step takes place.
    regards,
    mahesh

  • Need clarification on issue with tablespaces

    Hi All,
    I need clarification on this issue with tablespaces. I am following a guideline which says: "Presently, the general rule of thumb is to concentrate on tablespaces that are 90% or greater used and have less than 2 GB of free space."
    My scenario: I have a tablespace which is 97.23% used and has 10 GB of free space. Please advise: what is the best next step I can take in this respect?
    I am very new to this and just following the guideline, so could you all please explain some of the details about this to me?
    Thanks

    Well, the guideline says "*and* have less than 2 GB of free space".
    Since that tablespace has 10 GB of free space, the guideline says that you do not need to concentrate on it. Eh?
    I would be careful with guidelines that are hard-coded in this manner. You need to know the context in which the guideline was set.
    Assuming that your database/application doesn't grow suddenly by 1 GB or more (or create temporary segments/objects of 1 GB or larger), the "2 GB of free space" might make sense. Then again, it depends on the extent allocation type: what size are the extents being allocated?
    You should check with your organisation's senior DBAs on what happens if you don't concentrate on a tablespace because it is outside the guidelines!
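    To make the compound condition concrete, here is a tiny illustrative sketch in Java (the rule it encodes is just the guideline quoted above; the class and method names are made up, not any standard API):
        // Illustrative only: encodes the quoted rule of thumb,
        // "90% or greater used AND less than 2 GB of free space".
        public class TablespaceGuideline {

            static final double PCT_USED_THRESHOLD = 90.0;
            static final long FREE_BYTES_THRESHOLD = 2L * 1024 * 1024 * 1024; // 2 GB

            static boolean needsAttention(double pctUsed, long freeBytes) {
                // Both clauses must hold - note the AND, not OR.
                return pctUsed >= PCT_USED_THRESHOLD && freeBytes < FREE_BYTES_THRESHOLD;
            }

            public static void main(String[] args) {
                // The scenario from the question: 97.23% used but 10 GB free.
                long tenGb = 10L * 1024 * 1024 * 1024;
                System.out.println(needsAttention(97.23, tenGb)); // false
            }
        }
    On the question's numbers this prints false: the percentage clause matches, but the free-space clause does not, and the guideline requires both.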

  • Need clarification on Bigfile Tablespaces

    In the following Oracle Documentation Library PDF,
        Oracle® Database
        Concepts
        10g Release 2 (10.2)
        B14220-02
        October 2005
    section
        Overview of Tablespaces
        Bigfile Tablespaces (page: 3-5)
    it says,
        Benefits of Bigfile Tablespaces
        * Bigfile tablespaces can significantly increase the storage capacity of an Oracle database. Smallfile tablespaces can contain up to 1024 files, but bigfile tablespaces contain only one file that can be 1024 times larger than a smallfile tablespace. The total tablespace capacity is the same for smallfile tablespaces and bigfile tablespaces. However, because there is a limit of 64K datafiles for each database, a database can contain 1024 times more bigfile tablespaces than smallfile tablespaces, so bigfile tablespaces increase the total database capacity by 3 orders of magnitude. In other words, 8 exabytes is the maximum size of the Oracle database when bigfile tablespaces are used with the maximum block size (32K).
    I need clarification on how to arrive at 8 exabytes ?
    1024 x 32k x 64,000 ??
    According to the excerpt above, there's no mention of the maximum number of operating system blocks per extent. Unless this was assumed knowledge... how do I get 8 exabytes?
    And if "a database can contain 1024 times more bigfile tablespaces than smallfile tablespaces", then what's the upper limit on smallfile tablespaces? Was this sentence referring to the number of datafiles per smallfile tablespace? ...
    O_o
    Thanks !

    Hi,
    According to the [url http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14237/limits002.htm#i287915]Physical Database Limits page, a bigfile tablespace contains only one datafile or tempfile, which can contain up to approximately 4 billion (2^32) blocks. The maximum size of the single datafile or tempfile is 128 terabytes (TB) for a tablespace with 32K blocks and 32 TB for a tablespace with 8K blocks. In summary, a bigfile tablespace is a tablespace containing a single datafile that can be as large as 128 terabytes (TB), depending on the block size of the tablespace. In conjunction with setting the initialization parameter DB_FILES to the maximum value of 65,535, the total size of the database can be more than 8 exabytes (EB).
    >> how do I get 8 exabytes?
    You can calculate the maximum amount of space (M) in a single Oracle database as the maximum number of datafiles (D) multiplied by the maximum number of blocks per datafile (F) multiplied by the tablespace block size (B):
    M = D * F * B. Therefore, the maximum database size, given the maximum block size and the maximum number of datafiles, is:
    65,535 datafiles * 4,294,967,296 blocks per datafile * 32,768 bytes per block = 9,223,231,299,366,420,480 bytes = 8 EB.
    Cheers
    Legatti
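    As a quick sanity check on that arithmetic - a worked example in Java, nothing Oracle-specific - the product can be computed directly, and it just fits in a signed 64-bit integer:
        public class MaxDbSize {
            public static void main(String[] args) {
                long datafiles = 65535L;         // maximum DB_FILES
                long blocksPerFile = 1L << 32;   // ~4 billion blocks per bigfile datafile
                long blockSize = 32768L;         // 32K maximum block size

                // 65,535 * 2^32 * 2^15 = 2^63 - 2^47, so the product does not overflow.
                long bytes = datafiles * blocksPerFile * blockSize;
                System.out.println(bytes);                       // 9223231299366420480
                System.out.println(bytes / (double) (1L << 60)); // ~8.0 (exbibytes)
            }
        }
    So 8 EB is not 1024 x 32k x 64,000; it is the 128 TB maximum bigfile size (2^32 blocks of 32K each) multiplied by the 65,535-datafile limit.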

  • BPM, Workflow and Netweaver - Need Clarification

    Hi Guru,
    I am new to workflow, BPM and NetWeaver. I have several questions about those concepts.
    1. What is the difference between workflow and WebFlow? Which scenarios should take workflow into consideration, and which scenarios can WebFlow be applied to?
    2. What is the difference between workflow and BPM? If I am going to implement workflow in a company, do I need to implement business workflow as well as BPM?
    3. I need clarification on the NetWeaver platform and concept.
    4. What is the difference between workflow in R/3 and workflow under NetWeaver?
    5. If I am going to implement workflow integrated with R/3 and Outlook email, do I need to buy a new workflow for NetWeaver and the NetWeaver platform, or can I alternatively use the business workflow module under the R/3 system?
    Sorry for the many questions; I am new to these products. I am now working on software selection for workflow technology, as my company is going to implement a new workflow for a client. Thank you very much.
    Cheers,

    Please ask only one question (or closely related questions) per thread. This makes it a lot easier to get a good structure in the database of previously answered questions. While we are on the subject of previously answered questions, I think you should have a look at them....
    My suggestion is therefore:
    1. Close this thread.
    2. Read the Frequently Answered Questions before you ask (there are many workflow answers there).
    3. Search the forum.
    4. Create new threads if you have questions afterwards.

  • Need clarification for these names, R/3, WAS, NetWeaver

    Hi All
    I posted the question on WAS preview installation, then I realized I should have put it here. Here is the link to that post:
    Need clarification for these names, R/3, WAS, NetWeaver
    Thanks again for any input
    Xueqing

    Hi Xueqing,
    First of all, sorry for any confusion that we have caused you. I hope I can give you an answer that will clear up the confusion. Sorry, but it is a long explanation of the development of two application solutions - history tends to be very long-winded!
    It is <b>not</b> true that SAP R/3 Enterprise equals SAP Web AS, and I'll hopefully explain why:
    In the beginning (at least in the client-server world) SAP shipped only SAP R/3. The technology layer under SAP R/3 was called SAP Basis. There was only SAP Basis under SAP R/3.
    SAP then started to deliver other software solutions that were not included in or built on the SAP R/3 software. These included SAP Business Information Warehouse, SAP APO, SAP SEM, SAP CRM, and the list goes on.
    These solutions needed to run on a technology layer (like SAP R/3 did). SAP Basis was the obvious choice, as a common technology layer providing DB/OS abstraction and a coding environment.
    <b>~2001:</b>
    Later there came the need to have SAP Basis support the web, web standards and other programming languages. This radically changed what SAP Basis was, and we decided to rename the new technology layer SAP Web Application Server (SAP Web AS). So the SAP Basis name was retired, and SAP Basis 4.6D was the last release called SAP Basis.
    The new release and the "technology change" meant that all the applications mentioned above now ran on SAP Web AS. The first release of SAP Web AS was 6.10.
    <b>~2002:</b>
    SAP Web AS was first used in the SAP R/3 Enterprise release.
    The important fact is that SAP R/3 Enterprise runs on SAP Web AS. SAP R/3 Enterprise = SAP R/3 business applications + new business functions + SAP Web AS.
    <b>~2003:</b>
    I hope the above explanation is clear, because here technology took another major turn. It was realized that SAP now had a collection of business solutions/applications (SAP R/3, mySAP CRM, mySAP SCM, etc.) and a collection of technology solutions (SAP Web AS, SAP BW, SAP EP, etc.). The technology requirements for the business solutions did not end at SAP Web AS; they needed portal, data warehouse and knowledge management capabilities, etc., to develop business solutions on.
    To address this, SAP made the decision to combine all the technology solutions and tools into one single platform. This made complete sense for developers (SAP, partners and customers). This single platform is called SAP NetWeaver. It includes all the old individual components (SAP Web AS, SAP BW, SAP KM, SAP EP, etc.).
    <b>~2004:</b>
    I think you can guess what the next step is. Yes, the new release of SAP R/3. Since the release of SAP R/3 Enterprise and the release of SAP NetWeaver, SAP R/3's name has changed. It is now called mySAP ERP, as it includes a lot of applications that were previously sold separately (like SAP SEM, MSS, ESS, etc.).
    So now mySAP ERP runs on SAP NetWeaver (yes, everything that was in SAP Basis and then in SAP Web AS is still there, but SAP NetWeaver has so much more now).
    Also, with the release of SAP NetWeaver, SAP stopped using the old technology component names - you will not hear us talk about SAP Web AS, SAP BW, etc. anymore, just SAP NetWeaver releases.
    In addition, all the other SAP business applications also run on SAP NetWeaver, so the latest versions of mySAP CRM, mySAP SRM and mySAP SCM all run on SAP NetWeaver.
    So to simplify the explanation, it would be:
    <b>Evolution of SAP R/3:</b>
    SAP R/3 (up to release 4.6C) -> SAP R/3 Enterprise (releases 1.00 through 2.00) -> mySAP ERP (2004 onward)
    <b>Evolution of Basis:</b>
    SAP Basis (up to release 4.6D) -> SAP Web AS (up to release 6.40, which was included in SAP NetWeaver '04) -> SAP NetWeaver (2004 onward)
    SAP NetWeaver and mySAP ERP have <u>their own release cycles</u>. mySAP ERP always has an underlying technology release that it is built on (a SAP NetWeaver release).
    I hope this helps,
    Mike.

  • Need clarification regarding the test cable-diagnostics tdr command

    Hello,
    I've read about the test cable-diagnostics tdr command but I need clarification on the examples listed below to make sure that I am providing the right answer to my co-workers.
    Example 1:
    CXXX7SW17#show cable-diagnostics tdr int g0/20
    TDR test last run on: July 16 10:23:00
    Interface  Speed  Local pair  Pair length       Remote pair  Pair status
    Gi0/20     auto   Pair A      N/A               N/A          Normal
                      Pair B      72 +/- 10 meters  N/A          Open
                      Pair C      75 +/- 10 meters  N/A          Short/Crosstalk
                      Pair D      74 +/- 10 meters  N/A          Short/Crosstalk
    Does this example mean that there is a cabling issue on the line which is causing the connected device not to work properly?
    Example 2:
    CXXX2SW140#show cable-diagnostics tdr int g0/21
    TDR test last run on: July 16 09:16:22
    Interface  Speed  Local pair  Pair length  Remote pair  Pair status
    Gi0/21     100M   Pair A      N/A          Pair A       Normal
                      Pair B      N/A          Pair B       Normal
                      Pair C      N/A          Pair C       Normal
                      Pair D      N/A          Pair D       Normal
    Does this example state that the cable line is okay for use?
    Example 3:
    CXXX1SW19#show cable-diagnostics tdr int g0/22
    TDR test last run on: July 16 06:36:53
    Interface  Speed  Local pair  Pair length       Remote pair  Pair status
    Gi0/22     auto   Pair A      1  +/- 10 meters  N/A          Open
                      Pair B      39 +/- 10 meters  N/A          Open
                      Pair C      72 +/- 10 meters  N/A          Open
                      Pair D      1  +/- 10 meters  N/A          Open
    Does this example mean that there isn't a device connected on the other end? No pin contact?
    Thank you very much for any help you could provide.
    S

    I found this article here at supportforums that seemed like the best explanation I've read so far for TDR info.
    Hope that helps.

  • ZCM 11 SP3 and Windows 8.1, need clarification...

    I don't know if I can't read, but I need some clarification...
    When SP3 arrived, there were problems with Win 8.1.
    I have customers with machines that were upgraded to 8.1, and it has worked with the already-installed 11.2.3a agent, even if it's not supported. They deployed the ZCM 11.3 agent to the machines that were upgraded to 8.1, and they all crashed and never started again. I made tests in my test environment, with the same result.
    Last week I started to check and found the update "ZENworks 11.3.0a Windows 8.1 Update" (https://www.novell.com/support/kb/doc.php?id=7014805 / http://download.novell.com/Download?...d=0yMdXrTonF8~). I downloaded that yesterday and was going to test it today. I looked for the instructions again today, and now it's no longer available - Obsolete. I have not deployed it yet, so that is no problem.
    Since yesterday there is a new update, "ZCM 11.3.0 WIN8.1 Patch 866736" (http://download.novell.com/Download?...d=OvBLs9qZhrU~).
    But the instructions for it talk about how to update machines that are already updated. Here the text mentions "11.3.0_WIN8.1". What is that? Is it the now-Obsolete "ZENworks 11.3.0a Windows 8.1 Update"? Or machines patched with the standard "Update for ZENworks (11 SP3)" that was created during the SP3 install? If it refers to the latter, that can't be it, because in both my customers' production environments and my test environment the Win 8.1 machines crashed and never came up.
    The next two methods in the description describe how to update "Windows 8.1 Update for ZENworks (11 SP3)" if it is already imported (zman supf "Windows 8.1 Update for ZENworks (11 SP3)" ZCM_11.3.0_WIN8.1_20140404_866736.zip).
    But for those who have not imported it, what is the way to go? You can't download "Windows 8.1 Update for ZENworks (11 SP3)" anymore, and the patch seems to be for that one?
    What is the way to get Win 8.1 to work with ZCM 11.3?
    Can anyone clarify this?
    I don't know if this belongs in the agent forum or the server forum, but I'll start here.
    /Stefan

    CRAIGDWILSON wrote:
    > New Versions of those patches are being rolled out.
    >
    > Normally it happens at the same time, but looks like timing was
    > slightly off.
    >
    >
    >
    > On 4/30/2014 11:31 AM, Niels Poulsen wrote:
    > > stesjo wrote:
    > >
    > > >
    > > > So, at this point, it means that there is no way to get ZCM 11.3
    > > > to work with Windows 8.1?
    > > >
    > > > Old patches removed, and just patches for the removed patches
    > > > available?
    > > >
    > > > One can assume, that there is a reasom why the 8.1 Update is
    > > > removed, and made obsolete?
    > > > /Stefan
    > >
    > > ... One would think so, yes. Not sure what's the reason...
    > >
    Cool :-)
    Niels
    A true red devil...

  • B2B Inbound Error : Need Clarification

    Hi All,
    I would like to illustrate a particular use case for B2B inbound errors and ask whether there is any workaround available to get over the problem. We are using B2B version 10.1.2.
    Use Case Details:
    1. Consider the scenario wherein we have an inbound EDI file which has 100 transactions in it. B2B reads the file, automatically debatches it into individual transactions and processes them separately.
    2. Out of the 100 transactions in the input file, if, say, 5 transactions are error transactions, we have observed that all 100 transactions error out in B2B.
    3. The 95 correct transactions fail with the error "General Validation Error", and the 5 error transactions have the exact error details in the B2B Message Reports.
    4. Ideally, B2B should error out only the 5 error transactions and process the remaining 95 transactions without letting them fail with "General Validation Error".
    I am sure that most of us must have faced this error. Could someone please let me know the following:
    1. Is there any setting in B2B by which we can enable it to error out only the transactions which have valid errors in a single file?
    2. Is there any other workaround that we can take to avoid this issue?
    The reason we need a solution for this is that in our production B2B environment we receive inbound files from trading partners which have hundreds of transactions in each file.
    If even a single transaction has an error, the whole file errors out, and it is quite cumbersome to browse through the B2B Message Reports to find the exact error transaction, because all the correct transactions will have failed with the error "General Validation Error".
    Please let me know the inputs.
    Thanks,
    Dibya

    Hi Ramesh,
    Thanks for the clarification and information. We have already set the parameter OneErrorAllError = true, and my understanding was that if we set this parameter to TRUE, then if one transaction in the OUTBOUND batch errors out, the whole batch errors out.
    I was not aware that it also holds true for INBOUND. Is there any way to set this parameter based on direction (INBOUND or OUTBOUND)?
    We would like to have this parameter set for OUTBOUND and disabled for INBOUND transactions.
    Please let me know. Thanks Again.
    Regards,
    Dibya

  • Need clarification regarding select query

    Hi,
    I need a little clarification regarding a SELECT scenario.
    I want to select data from a table which has been manipulated between certain dates, like between 1-DEC-10 and 31-DEC-10; note that the table does not have any time/date column. I've applied the following query to do this:
    select * from TABLE_NAME where sysdate between to_date('01-DEC-10') AND to_date('31-DEC-10');
    Would it work fine? I've tried it against a table and it returned nothing, even though DML occurred in that period.
    Regards,
    Abbasi

    Abbasi wrote:
    > Would it work fine? I've tried it against a table and it returned nothing, even though DML occurred in that period.
    No. SYSDATE is evaluated once, at query time; the predicate does not depend on the rows at all, so it is either true for every row or false for every row, and in any case it says nothing about when a row was last modified. AFAIK, without log mining or auditing this is not possible.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/logminer.htm
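    If the window of interest is still within your undo retention, a Flashback Version Query is one alternative to LogMiner. A sketch only, in Java/JDBC - the connection details and table name are made up, and this cannot see changes older than the undo retention:
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class FlashbackVersions {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection details - adjust for your environment.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
                     Statement stmt = conn.createStatement();
                     // VERSIONS BETWEEN exposes the row history kept in undo, with
                     // pseudocolumns telling when and how each row version was created.
                     ResultSet rs = stmt.executeQuery(
                         "SELECT versions_starttime, versions_operation, t.* " +
                         "FROM table_name VERSIONS BETWEEN TIMESTAMP " +
                         "TO_TIMESTAMP('01-DEC-10', 'DD-MON-RR') AND " +
                         "TO_TIMESTAMP('31-DEC-10', 'DD-MON-RR') t")) {
                    while (rs.next()) {
                        System.out.println(rs.getTimestamp(1) + " " + rs.getString(2));
                    }
                }
            }
        }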

  • Unchecking Print Resolution - need clarification, need Jeff Schewe

    Seeking clarification on using the print module in LR. I have read several threads in which Jeff Schewe (Jeff, are you there?) suggests that you use your file as a master file in LR, and that if you uncheck the print resolution box you can print multiple sizes of an image by merely changing the print size in LR.
    I want to make sure that I've got this workflow right. You do your edits in LR and process your raw file. Then bring the file into CS3 for whatever other tweaks are needed. Add a layer and do your output sharpening on this layer, but don't go up to the Image > Image Size command. Do your sharpening without resizing or resampling.
    My RAW files come into CS3 with a resolution of 240 pixels/inch. This is the native resolution?? After sharpening in CS3 (without resizing or resampling), bring the file back into LR. In the print module, uncheck print resolution (LR shows the resolution as being 240 ppi) and enter the height and width of the print size (not the paper size, but the print size) in the cell size. Go into page setup (Windows), which will take you into your print driver. On my Epson R2400 this is where I choose sheet vs. roll, paper type and print quality, click ICM, then click Off. OK my way out of the Epson driver back to LR. On the white page layout in the center of the print module, LR will give me a new ppi reading. In my workflow, I typically print at 8x12 inches, so now LR is telling me that it is at 275 ppi. Pick profile and rendering intent in color management. As long as the readout for my print size in the white center section of Lightroom reads between 180-480 ppi, then at this point I am good to go and click print.
    Have I got it? Am I missing something? Please, enlighten me.
    Will LR provide a superior print to CS3 or is it six of one, half dozen of the other?
    Thanks,
    Cynthia

    First off, it's poor form to address posts in a public forum to single individuals...this is a forum and ALL members should be encouraged to post...this is akin to asking for an answer to a question to be sent directly by email.
    The resolution of raw files is xxxx pixels by xxxx pixels. That's the only way to think of resolution in Lightroom. As a result, the image may be said to be 240, 300, 360 or 480 PPI depending on the size the image will print. A small print, held closer, will need more PPI than a large print hung on a wall. So the resolution of a raw file will vary depending on the size of the final cropped & printed image. As long as the PPI of the image size is between 180-480 PPI, you really have no reason for up- or downsampling...
    When I work on an image in Photoshop from Lightroom, I'll have an image print size in mind. I use softproofing for final tweaks to the image - often applied locally - and set the PPI (Image Size without resample) to the size I want, then use PhotoKit Sharpener for the resulting pixel density. Then hit save and print from Lightroom.
    The -Edit file in Lightroom has the pixel density I specced in Photoshop and the softproofing tweaks as well. I then print from Lightroom - not so much because Lightroom is capable of producing BETTER prints...but because I find it quicker and easier to print with LR's print environment.
    If I want to make a different-sized print (or a print on different paper requiring a different profile), I'll open the tiff image into Photoshop from Lightroom using Edit Original...my original tiff file with all layers pops open, I'll make whatever changes I need and hit save again...so that one master tiff file becomes my print master file with all the layers I may need to either use, or turn off if I don't need them. The same image, if printed 4x6, will have a different pixel density than a 13x19" print, thus I'll have two separate sets of output sharpening. But I don't need to resample the image; I resize with resampling unchecked to change the ratio and density.
    And yes, all of this with Lightroom's print module set to no resampling and no sharpening...minor changes in the image cell size in LR don't require going back into Photoshop for resizing...
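    To see where LR's ppi readout comes from: ppi is just pixels divided by print inches. A tiny worked example in Java - the 3300-pixel long edge is an assumption, chosen only to match the 275 ppi at 8x12" quoted in the question:
        public class PpiDemo {
            public static void main(String[] args) {
                int pixelsLongEdge = 3300;   // assumed pixel dimension of the file
                double printInches = 12.0;   // long edge of an 8x12" print
                // The same pixels spread over fewer inches give a higher ppi, and vice versa.
                System.out.println(pixelsLongEdge / printInches); // 275.0 ppi
            }
        }
    Changing the cell size only changes the divisor, which is why the readout moves while the pixels stay fixed - and why no resampling is needed as long as it stays within 180-480 ppi.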

  • Need Clarification on sga_target and sga_max_size

    Hi,
    I need some clarification on SGA_TARGET and SGA_MAX_SIZE.
    I have the parameters set like below:
    SGA_MAX_SIZE=10G
    SGA_TARGET=9G
    And I spread the 9G across all the components (DB_CACHE, SHARED_POOL, etc.).
    My doubt: in case the DB needs more than 9 GB of memory, will it automatically take the extra 1 GB from SGA_MAX_SIZE,
    or do we have to change SGA_TARGET to 10G?

    > Unless and until we set SGA_TARGET=10G, the extra 1G (from SGA_MAX_SIZE) is not used. Am I correct?
    No - that's wrong. Any change in the value of SGA_TARGET affects only the sizes of the auto-tuned components. If you increase its value, the additional memory will be distributed among only the components it controls - so yes, the 1 GB can be used, because SGA_MAX_SIZE=10G. If you decrease the value, the reclaimed memory is taken back by the auto-tuning policy from one or more of the auto-tuned components.
    If SGA_MAX_SIZE is greater than SGA_TARGET, you can increase SGA_TARGET without restarting the instance. Otherwise, you'd need to shut down and restart the instance if you wanted to increase SGA_TARGET.
    Regards
    Girish Sharma
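    Since SGA_MAX_SIZE=10G leaves 1 GB of headroom, the resize can be done online. A minimal sketch in Java/JDBC, assuming SYSDBA credentials and a thin-driver URL you would replace with your own:
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class ResizeSgaTarget {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection details - adjust host, service and password.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/orcl", "sys as sysdba", "password");
                     Statement stmt = conn.createStatement()) {
                    // Allowed online only because SGA_MAX_SIZE (10G) > current SGA_TARGET (9G).
                    stmt.execute("ALTER SYSTEM SET SGA_TARGET = 10G SCOPE = BOTH");
                }
            }
        }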

  • Need clarification on exact size/amount of Arctic Silver to apply on AMD64 3000+

    I am installing the OEM HSF on my AMD64 3000+ Winchester CPU using Arctic Silver, and I need a bit of clarification about the size/amount of the Arctic Silver blob that I need to put on the CPU heat spreader.
    EDIT - I'm reading the Arctic Silver instructions right now from the website:
    Quote
    Only a small amount of Arctic Silver is needed
    P4- About the size of an uncooked grain of short-grain white rice or 2/3 of a BB.
    Athlon64- About the size of one and a half uncooked grains of short-grain white rice or 3/4 of a BB.
    Could someone give me a diameter in millimetres (mm) or something? I'll go look for some rice in the house, but anyway...lol... :P
    For clarification, I want to be sure I put the proper amount of AS on the heat spreader. I don't really know exactly how big a BB is. Is a BB a ball bearing, or a "BB" as used in BB guns? I'm guessing the size I should use is around 3 or 4 mm? Could someone clarify this for me before I go ahead?
    How much AS should I apply to the heat spreader? How big should the round blob of AS be?
    thanks,

    Ok, but using a piece of paper may introduce dust, lint or paper particles. They don't even say to spread the AS compound around at all; they just say:
    Quote
    9. On an Intel P4 or Athlon64 type CPU with a large metal heat spreader, put a small amount of Arctic Silver onto the center of the heat spreader as shown in the photo.
    Only a small amount of Arctic Silver is needed
    P4- About the size of an uncooked grain of short-grain white rice or 2/3 of a BB.
    Athlon64- About the size of one and a half uncooked grains of short-grain white rice or 3/4 of a BB.
    10.   RECHECK to make sure no foreign contaminants are present on either the bottom of the heatsink or the top of the CPU core. Mount the heatsink on the CPU per the heatsink's instructions. Be sure to lower the heatsink straight down onto the CPU.
    Once the heatsink is properly mounted, grasp the heatsink and very gently twist it slightly clockwise and counterclockwise one time each if possible. (Just one or two degrees or so.)
    Please note that some heatsinks cannot be twisted once mounted.
    Our testing has shown that this method minimizes the possibility of air bubbles and voids in the thermal interface between the heat spreader and the heatsink. Since the vast majority of the heat from the core travels directly through the heat spreader, it is more important to have a good interface directly above the actual CPU core than it is to have the heat spreader covered with compound from corner to corner.
    So they say to put a tiny little blob on there and just attach the heatsink right on top. I am guessing that attaching the heatsink does the work of spreading the compound and avoids voids...lol...if you'll excuse the pun.

  • Need Clarification On Unicode and Upgrade-ECC6.0

    Dear All,
    I need some clarification on Unicode and upgrade. It would be a great help if you could give your time.
    We had 2 code pages - 1100 and 1401 - in our 4.6B system. We had the languages FR, EN, ES, PT and PL. The system has now been upgraded to ECC 6.0 non-Unicode.
    Now in I18N->System configuration (RSCPINST), only EN is listed. SPUMG asked for activation of I18N to proceed. When the I18N activation was done, it knocked the code page 1401 out of the TCPDB table.
    Is this normal?
    But the code page 1401 is shown as consistent in the SCP transaction.
    The system setting has changed to single code page. Will this affect the Unicode migration? How did the additional code page 1401, which was in 4.6B, get knocked out now? How did the languages ES, FR, PT, IT and PL, which were in 4.6B, get knocked out of RSCPINST?
    We are manually filling the vocabulary, since SPUMG is not showing the scanning tabs. The language key in the vocabulary is not completely set. The reprocess logs are not completely green. Will this still allow Unicode migration? Can we start the Unicode migration even with this status?
    Regards,
    Santosh

    Hi Santosh,
    SAP ECC 6.0 is not supported with MDMP. This is the reason for the behaviour in RSCPINST.
    The standard way for an upgrade based on start release 4.6B with MDMP would be TU&UC (see SAP Note 959698).
    Are you following this procedure?
    Best regards,
    Nils Buerckel
    SAP AG

  • Need clarification on select query Urgent!!!!!

    Hi,
    I want clarification about this query because I want to improve the performance of a report; in ST05 it shows up 1,40,250 times, so I need to do something about it. Please can anybody help me out?
    SELECT matnr werks dispo
      FROM marc
      INTO TABLE it_marc
      WHERE matnr IN s_matnr
        AND werks IN s_werks
        AND dispo EQ p_dispo.
    DESCRIBE TABLE it_marc LINES w_lines.
    IF w_lines IS INITIAL.
      MESSAGE e518(zv) WITH p_dispo.
    ENDIF.
    ENDIF.
    thanks,
    murali.

    SELECT matnr
      FROM mara
      INTO TABLE it_mara
      WHERE matnr IN s_matnr.
    IF sy-subrc eq 0.
      SELECT matnr werks dispo
        FROM marc
        INTO TABLE it_marc
        FOR ALL ENTRIES IN it_mara
        WHERE matnr = it_mara-matnr
          AND werks IN s_werks
          AND dispo EQ p_dispo.
      DESCRIBE TABLE it_marc LINES w_lines.
      IF w_lines IS INITIAL.
        MESSAGE e518(zv) WITH p_dispo.
      ENDIF.
    ENDIF.
    I am sorry, but I also comment on bad recommendations if I find any, and the above example is such a bad recommendation.
    SELECT matnr
      FROM mara
      INTO TABLE it_mara
      WHERE matnr IN s_matnr.
    The first SELECT reads much more than you really need, when only a SELECT SINGLE is necessary.
      DESCRIBE TABLE it_marc LINES w_lines.
      IF w_lines IS INITIAL.
    It is also not necessary to count thousands of lines if you only want to know whether there was at least one record; a check of sy-subrc is much better. This should also be done in the original coding.
    As said above, check what is in s_matnr and s_werks and what is supposed to come back.
    Siegfried
