Question about sequence cache

desc temp1
  ID    NUMBER     -- primary key
  COMP  NUMBER(5)
I have a trigger called BI_TEMP1:
Trigger Type: BEFORE EACH ROW
Triggering Event: INSERT
begin
  -- assign the next sequence value to the new row's primary key
  for c1 in (select TEMP1_SEQ.nextval next_val from dual) loop
    :new.ID := c1.next_val;
  end loop;
end;
The sequence is defined as:
Min Value: 1
Max Value: 999999999999999999999999999
Increment By: 1
I have a program which inserts 14 rows into the table.
After the program finished, I ran:
select max(id) from temp1;
14
When I ran the program again to insert those 14 rows, the sequence started from 21 instead of 15.
Now, I know this is because the sequence cache equals 20.
My question is: should I use a cache or not?
I read about the cache option but did not understand its advantages.
In which cases is it better to use a cache, and in which is it not?
Thanks in advance

First of all, using a sequence is no guarantee that you'll end up without gaps! Transactions can be rolled back, etc., just like coffee can be spilt on your chequebook or whatever.
A cache for the sequence values is useful because it lets Oracle keep the next batch of values in memory, cutting down on the work needed for each call. If you don't have a cache, this is what happens:
1. Get the next value from the sequence.
2. Use the value.
3. Get the next value from the sequence.
4. Use the value.
5. Get the next value from the sequence.
6. Use the value.
etc...
However, if you have a cache of 5, this is what happens:
1. Get the next 5 values from the sequence and store them in memory.
2. Use the first value.
3. Use the second value.
4. Use the third value.
5. Use the fourth value.
6. Use the fifth value.
7. Get the next 5 values from the sequence and store them in memory.
8. Use the first value.
etc...
So a cache reduces the number of calls to the sequence. However, as soon as the memory is wiped (a database bounce, a shared_pool flush, etc.) the cached sequence numbers are gone, and the next time you ask for a value Oracle has to go back to the sequence. That is why your second run started at 21 instead of 15: the unused values 15-20 were sitting in the cache and were discarded.
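For illustration, here is how the two variants might be declared. This is just a sketch using the names from your post; CACHE 20 is what Oracle uses by default when no cache clause is given:

-- Cached: Oracle pre-allocates 20 values in memory.
-- Fast, but the unused cached values are skipped after an
-- instance restart or shared-pool flush, producing gaps.
CREATE SEQUENCE temp1_seq
  MINVALUE 1
  MAXVALUE 999999999999999999999999999
  INCREMENT BY 1
  CACHE 20;

-- Uncached: every NEXTVAL call updates the data dictionary.
-- No cache-related gaps, but slower under concurrent inserts.
CREATE SEQUENCE temp1_seq_nocache
  MINVALUE 1
  MAXVALUE 999999999999999999999999999
  INCREMENT BY 1
  NOCACHE;

-- Check the current setting:
SELECT sequence_name, cache_size
FROM user_sequences
WHERE sequence_name = 'TEMP1_SEQ';

As a rule of thumb: keep the cache unless you have a hard requirement to minimise gaps. On a busy system, NOCACHE turns every NEXTVAL into a data dictionary update and becomes a point of contention, and since a sequence cannot guarantee gap-free numbers anyway (see above), you usually give up performance for nothing. Incidentally, if you are on Oracle 11g or later, the trigger does not need the loop; :new.ID := temp1_seq.nextval; works directly in PL/SQL.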

Similar Messages

  • A question about cache group error in TimesTen 7.0.5

    Hello Chris,
    We got some errors about a cache group:
    2008-09-21 08:56:15.99 Err : ORA: 229574: ora-229574-3085-ogTblGC00405: Failed calling OCI function: OCIStmtFetch()
    2008-09-21 08:56:15.99 Err : ORA: 229574: ora-229574-3085-raUtils00373: Oracle native error code = 1405, msg = ORA-01405: fetched column value is NULL
    2008-09-21 08:56:28.16 Err : ORA: 229576: ora-229576-2057-raStuff09837: Unexpected row count. Expecting 1. Got 0.
    The exact situation is: our Oracle server was restarted for some reason, but we did not restart the cache group agent. Then these errors started appearing.
    We want to know: if the Oracle server restarts, do we need to restart the cache agent? Thank you.

    Yes, the tracking table will track all changes to the associated base table. Only changes that meet the cache group WHERE clause predicate will be refreshed to TimesTen.
    The tracking table is managed automatically by the cache agent. As long as the cache agent is running and AUTOREFRESH is occurring the table will be space managed and old data will be purged.
    It is okay if an AUTOREFRESH very occasionally fails to complete within its defined interval, but if this happens with any regularity then it is a problem, since that situation is unsustainable. To remedy it you need to try one or more of the following:
    1. Tune execution of AUTOREFRESH queries in Oracle. This may mean adding additional indexes to some of the cached Oracle tables. There is an article on this in MetaLink (doc note 473493.1).
    2. Increase the AUTOREFRESH interval so that a refresh can always complete within the defined interval (see the sketch after this list).
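    For reference, the interval is declared when the cache group is created and can be changed afterwards with ALTER CACHE GROUP. A minimal sketch with a hypothetical cache group and table (the names, columns and intervals are placeholders, not taken from this thread):
    -- Hypothetical read-only cache group with autorefresh.
    CREATE READONLY CACHE GROUP cg_orders
      AUTOREFRESH INTERVAL 10 MINUTES
      FROM scott.orders (
        order_id NUMBER NOT NULL,
        status   VARCHAR2(10),
        PRIMARY KEY (order_id)
      );
    -- Lengthen the interval if refreshes cannot finish in time.
    ALTER CACHE GROUP cg_orders SET AUTOREFRESH INTERVAL 30 MINUTES;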
    In any event it is important that you have enough space to cope with the 'steady state' size of the tracking table. If the cache agent will not be running for any significant length of time, you need to manually clean up the tracking table. In TimesTen 11g a script to do this is provided, but it is not officially supported in TimesTen 7.0.
    If the rate of updates on the base table is such that you cannot arrive at a sustainable situation by tuning etc. then you will need to consider more radical options such as breaking the table into multiple separate tables :-(
    Chris

  • Question about cache

    hi all
    we just upgraded APEX from version 1.6.1.00.03 to version 3.1.2.00.02.
    Because this was a massive upgrade, I need to run some tests on my application.
    Now I've noticed that in the new version there is a cache option on regions, in the page edit attributes, etc.
    I want to ask: since there is no cache option in the old version,
    and the application in the new version is an import from the old version
    (I did an export from the old and an import into the new),
    does APEX 3.1 cache these pages by default?
    Or is the default that pages and regions are not cached unless I decide so?
    Or do pages and regions have different cache defaults?
    This is important for me to know, because if the application's default is to cache, I need to go to every page and change it.
    My second question is:
    let's say the pages are cached; does that mean that if I have processes on the page (after submit, before header, etc.) they will not run?
    Thanks in advance
    Naama

    Hi Scott,
    In her first post, Naama is talking about page processes in general, and mentioned 'after submit' and 'before header' ("does that mean that if I have processes on the page (after submit, before header, etc.) they will not run?"), and your response was also general ("That's correct"). I agree that for cached pages the Show-related processes, like 'before header', do not run, but what about the Accept processes, like 'after submit'? Aren't they fired regardless of the cache status?
    Thanks,
    Arie.

  • A question about upgrades and more upgrades

    I've got a question about the upgrade sequence and the costs thereof.
    I have CS5.5 Production Premium. Suppose I upgrade to CS6 Production Premium. When CS6.5 comes out and I decide to upgrade, would it be considered an upgrade from 5.5 or from 6 for pricing purposes?
    I presume the older the software, the more expensive the upgrade; but if I'm upgrading from my CS6 upgrade, would it be considered an upgrade from CS5.5 (the original suite) or an upgrade from CS6?
    Thank you

    You have CS5.5 now, and you upgrade to CS6.
    When CS6.5 comes out, it will be considered an upgrade from CS6 (and not from CS5.5, your original suite). Hope this is clear.
    Jagadish

  • Lightroom 5 will not open: "Lightroom encountered an error when reading from its preview cache and needs to quit"

    I have a question about Lightroom 5. I used it last night; when I went to open it today, it would not open. I get an error message: "Lightroom encountered an error when reading from its preview cache and needs to quit. Lightroom will attempt to fix the problem when reopened."

    https://forums.adobe.com/message/6219922#6219922
    See if the issue in the thread above helps you to solve your problem.

  • Question about LRU in a replicated cache

    Hi Tangosol,
    I have a question about how the LRU eviction policy works in a replicated cache that uses a local cache for its backing map. My cache config looks like this:
    <replicated-scheme>
      <scheme-name>local-repl-scheme</scheme-name>
      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>base-local-scheme</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
    </replicated-scheme>

    <local-scheme>
      <scheme-name>base-local-scheme</scheme-name>
      <eviction-policy>LRU</eviction-policy>
      <high-units>50</high-units>
      <low-units>20</low-units>
      <expiry-delay/>
      <flush-delay/>
    </local-scheme>
    My test code does the following:
    1. Inserts 50 entries into the cache
    2. Checks to see that the cache size is 50
    3. Inserts 1 additional entry (as I understand it, this should cause the eviction logic to kick in)
    4. Checks the cache size again, expecting it to now be 20
    With HYBRID and LFU eviction policies, the above logic works exactly as expected. When I switch to LRU however, the code at step 2 always returns a value significantly less than 50. All 50 inserts appear to complete successfully, so I can only assume that some of the entries have already been evicted by the time I get to step 2.
    Any thoughts?
    Thanks.
    Pete L.
    Addendum:
    As usual, in attempting to boil this issue down to its essential elements, I left out some details that turned out to be important. The logic that causes the condition to occur looks more like:
    1. Loop 2 times:
    2. Create named cache instance "TestReplCache"
    3. Insert 50 cache entries
    4. Verify that cache size == 50
    5. Insert 1 additional entry
    6. Verify that cache size == 20
    7. call cache.release()
    8. End Loop
    With this logic, the problem occurs on the second pass of the loop. Step 4 reports a cache size of < 50. This happens with LRU, LFU, and HYBRID-- so my initial characterization of this problem is incorrect. The salient details appear to be that I am using the same cache name each pass of the loop and that I am calling release() at the end of the loop. (If I call destroy() instead, all works as expected.)
    So... my revised question(s) would be: is this behavior expected? Is calling destroy() my only recourse?
    Message was edited by: planeski

    Robert,
    Attached are my sample code and cache config files. The code is a bit contrived-- it's extracted from a JUnit test case. Typically, we wouldn't re-use the same cache name in this way. What caught my eye however, was the fact that this same test case does not exhibit this behavior when running against a local cache directly (as opposed to a repl cache backed by a local cache.)
    Why call release? Well, again, when running this same test case against a local cache, you have to call release or it won't work. I figured the same applied to a repl cache backed by a local cache.
    Now that I understand this is more a byproduct of how my unit tests are written and not an issue with LRU eviction (as I originally thought), it's not a big deal-- more of a curiosity than a problem.
    Pete L.
    Attachment: coherence-cache-config.xml (*To use this attachment you will need to rename 545.bin to coherence-cache-config.xml after the download is complete.)
    Attachment: LruTest.java (*To use this attachment you will need to rename 546.bin to LruTest.java after the download is complete.)

  • SUP - 2 questions about the CDB (cache database)

    Hi,
    I have 2 questions about the cache database and the cache groups:
    1 - How does the "On demand" cache group policy exactly works? I know that online cache group is without storing any data on the CDB making direct requests to de backend from the device, the DCN is based on updating from the backend, the scheduled is based on a time period, but I don't understand how the "on demand" exactly works, and why it has a time period too.
    2 - Is it possible to query the cache database table to check the data that SUP has stored? How can I do this?
    Thank you!

    I posted a similar question in the SUP Apps project not too long ago, and Paul Horan provided this useful reply:
    Create a "Sybase ASA v12.x for Unwired Server" connection profile in the Enterprise Explorer.  I named mine CDB.
    : Host = localhost (or whatever the machine name is)
    : Port = 5200
    : Database name = "default"
    : User Name = "dba"
    : Password = "sql"
    Obviously, change the userid/password to match, if you changed them during install time.
    Connect, and you'll see the "default" database displayed.
    Navigate down through the Tables folder, and the first subfolder is labeled something like [#should_delete_sk ...]  Start there.
    You'll see a bunch of tables with the naming convention "D1" + package name + package version + MBO name.  These are the cache tables for the MBOs.
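    To answer the second question concretely: once connected, you can query those cache tables like any other. A minimal sketch, assuming the naming convention above (the LIKE pattern and the table name in the second query are hypothetical):
    -- List the physical cache tables in the CDB
    -- (SYS.SYSTAB is the SQL Anywhere catalog view of tables).
    SELECT table_name
    FROM SYS.SYSTAB
    WHERE table_name LIKE 'D1%';
    -- Then inspect the cached rows for one MBO directly, e.g.:
    -- SELECT * FROM dba.D1_salesPkg_1_0_Customer;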

  • App-V 4.6 - A question about sequencing Add-ons

    Hello,
    just a short question about App-V and add-ons:
    I want to sequence and publish an add-on for Microsoft Excel in App-V 4.6. But is it right that, if I want to use the add-on later, I have to use a virtual (sequenced) primary program?
    I did not find this information during my research and I am not really sure about it.
    It would be nice if it were possible to use a sequenced add-on with a local installation of e.g. Excel.
    Thank you very much!
    Regards, ST_Jan

    Hello,
    No, you can sequence the add-in and then create a shortcut to Excel.
    See this generic guide:
    http://blogs.technet.com/b/gladiatormsft/archive/2012/09/02/app-v-4-6-oldie-but-goodie-using-url-shortcuts-to-simply-user-experience-when-running-virtualized-ie-plug-ins-plus-a-bonus-tip.aspx
    and this topic:
    http://blogs.technet.com/b/virtualvibes/archive/2013/05/01/sequencing-a-shortcut-in-app-v-4-6-local-and-virtual-interaction.aspx
    Nicke Källén | The Knack| Twitter:
    @Znackattack

  • Follow-up question about an image not looking good in the Canvas (Shane?)

    I guess this is a follow-up question for Shane...
    You wrote this in response to someone a year ago, and it helped me out:
    Shane's Stock Answer #49 - Why is the quality different between what I see in the Viewer and what I see in the Canvas?
    Well... the Viewer is just that: a viewer. It will display anything that FCP will recognize as usable video or graphics. The Canvas is a viewer too, but at the pixel dimensions specified by the settings of your project and sequence.
    For example, if your graphic or footage is much higher resolution than your 720x480 DV sequence, FCP is interpolating your file down to fit the settings of the sequence. Usually this makes it look not so hot. DV is 5:1 compression with 4:1:1 color sampling. Your pristine images and graphics are being crushed.
    Same with picture files: high-res pics adopt the sequence settings and will render to those specs, and most likely they are not as high quality.
    So, based on your answer, I made an HDV sequence, dropped my DVCPro50 NTSC footage and my TIFFs into the sequence, and now the stills look great. Is this an okay workaround, or do you have another suggestion? I'm worried about this HD sequence taking too long to compress to a QT file in the end.
    Thanks!

    Photo JPEG is a good option.  But this all depends on what your final output will be.  If you are going to make a DVD, then that is SD, so using an HD sequence setting makes no sense.  Photo JPEG is good, but not realtime in FCP.  DV50 or ProRes NTSC are good options.
    Unless you are making an HD master...in which case ProRes 422 for HD is good

  • Some questions about the integration between BIEE and EBS

    Hi,
    I'm a newbie with BIEE. These days I have had a look at the BIEE architecture and components. The next project involves some BIEE development work based on an EBS application. I have some questions about the integration:
    1) Generally, are the BIEE database and application server separate from the EBS database and application? Can both the BIEE 10g and 11g versions be integrated with EBS R12?
    2) In the BIEE Administration Tool, the first step is to create physical tables. If the source application is EBS, is it still necessary to create the physical tables?
    3) If the physical table creation is needed, how is the data transfer from the EBS source tables to the BIEE physical tables done? Which ETL tool do most developers prefer: Warehouse Builder or Oracle Data Integrator?
    4) During the data transfer phase, if a very large volume of data needs to be transferred, how is consistency kept? For example, if 1 million rows are being transferred from the source database to the BIEE physical tables and users open a BIEE report when 50% is complete, can they see the new 50% of the data on the reports? Is there some transaction control in the ETL phase?
    Could anyone give me some guidance? I would also appreciate any other information.
    Thanks in advance.

    1) Generally, are the BIEE database and application server separate from the EBS database and application? Can both the BIEE 10g and 11g versions be integrated with EBS R12?
    You should consider Oracle BI Applications here, which uses OBIEE as the reporting tool with different pre-built modules. Both 10g and 11g come with different versions of BI Apps, which support sources like Siebel CRM, EBS, PeopleSoft, JD Edwards etc.
    2) In the BIEE Administration Tool, the first step is to create physical tables. If the source application is EBS, is it still necessary to create the physical tables?
    This is independent of the source. It is OBIEE modeling: you create the RPD with all its layers. If you build the RPD from scratch you will have to create all the layers yourself; if BI Apps is used, you get a pre-built RPD along with the other pre-built components.
    3) If the physical table creation is needed, how is the data transfer from the EBS source tables to the BIEE physical tables done? Which ETL tool do most developers prefer: Warehouse Builder or Oracle Data Integrator?
    BI Apps comes with pre-built ETL mappings, mainly for use with Informatica. Only BI Apps 7.9.5.2 comes with ODI, but Oracle plans to move to ODI only for future releases.
    4) During the data transfer phase, if a very large volume of data needs to be transferred, how is consistency kept? Can users see partially loaded data on reports?
    Users will still see the old data, because the right approach is to turn on the BI Server cache and purge it after every load (a sketch of the purge step follows below).
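    As a sketch of that purge step (SAPurgeAllCache() is the BI Server ODBC procedure for clearing the query cache; it can be run via nqcmd or an Issue SQL action once the ETL load completes):
    -- Clear the whole OBIEE query cache so reports pick up the new load.
    call SAPurgeAllCache();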
    Refer..http://www.oracle.com/us/solutions/ent-performance-bi/bi-applications-066544.html
    and many more docs on Google.
    Hope this helps

  • Questions about editing with Io HD or Kona 3 cards

    My production company is switching from Avid to Final Cut Pro. I have a few editing-system questions (not about ingesting and outputting, just questions about systems for the actual editors; we will have Mac Pros with either a Kona 3 or an Io HD for ingest and output).
    1) Our editors work from home, so they will most likely be using MacBook Pros (Intel Core 2 Duo 2.6 GHz, 4 GB) with eSATA drives to work on uncompressed HD. Will they be able to work more quickly in FCP on the new Mac Pro 8-core (2 quad-core 2.8 GHz Intel Xeon), or will the MacBook Pros hold their own editing hour-long documentaries in uncompressed HD?
    2) Will having an AJA Kona 3 (if we get the editors Mac Pros) or an Io HD (for the MacBook Pros) connected be a significant help to the editors and their process? Will it speed up their work? Will it let them edit sequences without having to render clips of different formats? Or will they be just as well off editing without the Io HD?
    I'm just trying to get a better understanding of the necessity of the AJA hardware in terms of helping the editors do what they have to do with projects that have been shot on many formats: DVCPro tapes, Aiptek cameras that create QTs, and P2 footage.
    Thanks

    1. With the Io HD, laptops become OK for working with ProRes and simple eSATA setups. Without the Io, they can't view externally on a video monitor (a must in my book). It will not speed up rendering a ton, nor will it save renders of mixed formats. The idea is to get all source footage to ProRes with the Io; the Io then also lifts from the CPU the work of converting ProRes to something you can monitor externally on a video monitor, and records back to any tape format you want... all in real time.
    2. Kona 3s on towers would run circles around render times on a laptop, no matter what the codec, but the Kona does not really speed renders up. That's a function of the CPU and how fast it is (lots of CPUs at faster speeds will speed up render times).
    I'd recommend you capture to ProRes with Io's or the Kona 3 and don't work in uncompressed HD. You gain nothing quality-wise by doing it, and you only use up a ton of disk space (six times the size, in fact) capturing and working in uncompressed HD, which, from your post, you're not shooting anyway. The lovely thing about ProRes is that it's visually lossless, efficient, and speeds up the editing process. Mixing formats can be done, but it's better to convert all source footage to ProRes and edit that way.
    With either the Kona or the Io you can then output to uncompressed HD tape; that's what they do for you no matter what codec you've edited in. ProRes is designed to be the codec of choice for HD projects, especially when you're shooting different formats. Get them all singing the same tune in your editing stations and you'll be a much happier camper. The only reason to buy laptops is portability; otherwise you're much better off speed-wise with towers and the Kona 3.
    Jerry
    Message was edited by: Jerry Hofmann

  • Question about Kurt's comments discussing the separation of AIA & CDP - Test Lab Guide: Deploying an AD CS Two-Tier PKI Hierarchy - Kurt L Hudson MSFT

    Question about the sentence in bold. What is the meaning behind this comment?
    How would you separate the roles of the AIA and CDP from a subordinate CA server? I can see where I would add a CES and CEP server, which has those roles as well, but I don't completely understand his comment, because in this second step (http://technet.microsoft.com/en-us/library/tlg-key-based-renewal.aspx)
    he shows how to implement CES and CEP.
    This is from the guide located at: http://technet.microsoft.com/library/hh831348.aspx
    Step 3: Configure APP1 to distribute certificates and CRLs
    In the extensions of the root CA, it was stated that the CRL from the root CA would be available via http://www.contoso.com/pki. Currently, there is not a PKI virtual directory on APP1, so one must be created.
    In a production environment, you would typically separate the issuing CA role from the role of hosting the AIA and CDP.
    However, this lab combines both in order to reduce the number of resources needed to complete the lab.
    Thanks,
    James

    My concern is, they have a base of 2-3k XP systems, and over this year they are migrating them to Windows 7. During this time they will also be upgrading hardware for the existing Windows 7 machines. The turnover of certificates is going to be high, which, from what I've read here, worries me.
    http://blogs.technet.com/b/askds/archive/2009/06/24/implementing-an-ocsp-responder-part-i-introducing-ocsp.aspx
    The application then can go to those locations to download the CRL. There are, however, some potential issues with this scenario. CRLs can get rather large over time, depending on the number of certificates issued and revoked. If CRLs grow to a large size, and many clients have to download CRLs, this can have a negative impact on network performance. More importantly, by default Windows clients will time out after 15 seconds while trying to download a CRL. Additionally, CRLs contain information about every currently valid certificate that has been revoked, which is an excessive amount of data given that an application may only need the revocation status for a few certificates. So, aside from downloading the CRL, the application or the OS has to parse the CRL and find a match for the serial number of the certificate being checked.
    With the above limitations, which mostly revolve around scalability, it is clear that there are some drawbacks to using CRLs. Hence the introduction of the Online Certificate Status Protocol (OCSP). OCSP reduces the overhead associated with CRLs. There are server and client components to OCSP: the OCSP Responder, which is the server component, and the OCSP Client. The OCSP Responder accepts status requests from OCSP Clients. When the OCSP Responder receives a request from a client, it needs to determine the status of the certificate using the serial number presented by the client. First the OCSP Responder determines whether it has a cached response for the same request. If it does, it can send that response to the client. If there is no cached response, the OCSP Responder checks whether it has the CRL issued by the CA cached locally. If it does, it can check the revocation status locally and send a response to the client stating whether the certificate is valid or revoked. The response is signed by the OCSP Signing Certificate that is selected during installation. If the OCSP Responder does not have the CRL cached locally, it can retrieve the CRL from the CDP locations listed in the certificate, parse it to determine the revocation status, and send the appropriate response to the client.

  • Questions about CIN tax procedure choice and pricing schemas

    Hi all,
    I have to implement SAP at an Indian company, and I'm reviewing all the particularities of this country (in particular the tax procedures and the great number of different tax conditions used).
    I have two questions about tax procedures and pricing schemas. Any feedback on these points will be appreciated.
    a) To choose between tax procedures TAXINN and TAXINJ, which elements do I have to consider?
    I have read a lot of documentation about CIN implementation and I'm oriented toward choosing the TAXINN schema, but if possible I would like to understand better what speaks for one choice or the other.
    b) Defining the pricing schemas for India, after checking with local users and using examples of documents actually produced (in particular tax invoices), I have understood that taxes have to be applied to an amount derived from the price list, minus discounts granted to the customer, plus any surcharges to be billed (packing, transport, etc.).
    Is it correct that, for every type of tax, the tax amount is calculated on the "net value" defined at item level, or are there exceptions to this rule?
    Thanks in advance
    Gianpaolo

    hi,
    this is to inform you that:
    a) About point 1: I know the difference between the two tax procedures (conditions versus formulas). I have also read in other posts on the forum that TAXINN is preferable. So I would like to understand the advantages of choosing TAXINJ instead. Are there particular reasons, or is it only an alternative customizing setting?
    a.a. For the advantages of TAXINJ and TAXINN, please see the threads "please give me the advantages of TAXINJ and TAXINN" and:
    CIN - TAXINN and TAXINJ
    b) About point 2: which value is used as the base amount to calculate taxes is not a choice, but is defined by the fiscal requirements of the country, in this case India. I know that, as Lakshmipathi wrote in answer to my question, there can be exceptions, but it was important for me to confirm that I had correctly understood the sequence of the pricing conditions in the schema in the "normal" situation.
    b.b. You can create your own pricing procedure for this and go ahead.
    Hope this clears your issue.
    balajia

  • One question about Pricing and Conditions has puzzled me for a long time!

    One question about Pricing and Conditions has puzzled me for a long time. I'll take one example to explain my question:
    1 - First, my sales order uses pricing procedure RVAA01.
    2 - Next, the pricing procedure RVAA01 has some condition types, such as EK01 (Actual Costs), PR00 (Price), and so on.
    3 - Next, the condition type PR00 defines access sequence PR00 as its access sequence.
    4 - Next, the access sequence PR00 has some condition tables, such as:
         table 118: "Empties" Prices (Material-Dependent)
         table 5: Customer/Material
         table 6: Price List Type/Currency/Material
         table 4: Material
    5 - Next, I need to maintain the condition tables' records, for example for table 5 (Customer/Material). I guessed that SAP would supply a screen for me to enter the data of table 5: SAP would ask me to select a table, such as table 5, and then go to a screen where I enter its data. But when I use transaction VK31 or VK32 to maintain condition records, I found it totally different from my guess:
    A - First, I cannot find a place to open a table, such as table 5, and enter its data.
    B - Second, for example, when I select VK31 -> Discounts/Surcharges -> By Customer/Material, SAP shows a grid view on the right side. On each line of the grid view you need to select the condition type in the first field, and this confuses me very much. Why does SAP ask me to select a condition type and not a condition table? By the normal logic, it ought to be the condition table, not the condition type!
    Dear all, I'm new to SD. Maybe this is a very stupid question, but it has puzzled me for a long time. If anyone can explain this in detail and help me understand the concept, I will appreciate it very much. Thank you.

    Hi,
    You said that you are using transaction codes VK31 or VK32.
    These transaction codes are used to enter condition records for standard condition types. As you can see, a grid on the left side has all the standard condition types, like prices, discounts, taxes, freights.
    Please try T-code VK11 (create) or VK12 (change mode).
    Here you can enter the required condition type on the initial screen (like PR00, MWST, K004, K005, etc.).
    After entering the condition type, press Enter or click the Combinations icon at the top of the screen. Then you will see all the condition tables you maintained for that condition type, like (as you said) table 118, table 5, table 6 and table 4.
    You can select any table and press Enter; then you go to a screen with all the field catalogues you maintained for that table. For example, if you selected the combination Customer/Material (table 5), after pressing Enter you will see the customer field at the top, and the material fields below.
    You can enter all the required values and save the condition record.
    Hope this is clear.
    REWARD IF HELPFUL.
    Regards,
    praveen

  • Basic questions about MacBook Pro + external monitor

    Hi,
    I have some very basic questions about using a MacBook Pro with an external display. I don't actually have them yet, but I need to know how things work.
    So, here they are:
    1) Can I use the external display as the main display?
    2) Will the external display run at its resolution or at that of the MacBook Pro?
    3) Somewhere I read that you cannot keep the MacBook Pro open and get the full resolution of the external display. Is that true?
    4) Is it dangerous to keep the MacBook closed while using the external display?
    5) Does using the external display impact the MacBook's performance?
    I know... a lot of questions, but it would be nice if someone could help me.
    Thanks.

    Hi - I am presently using an external display.
    To answer your questions in sequence:
    1. Yes, you can use the external display as your main display. The way to enable that mode is to put your MacBook Pro to sleep and attach the external display. Wake your MBPro with the lid closed and you will see the external display as your main display. You can alternatively set the external display to "mirror" your notebook using Preferences/Display.
    2. The external display will run at its resolution although you can adjust and calibrate it using Preferences/Display.
    3. Not true. You get max resolution on both displays. Of course you may have to tweak as mentioned above.
    4. Not at all. I use this mode all the time. Just make sure you initially set up as mentioned above and your LCD on the MBPro will stay off.
    5. I have not seen any performance degradation whatsoever.
    Hope this helps.
