Question about cache

hi all
We just upgraded APEX from version 1.6.1.00.03 to version 3.1.2.00.02.
Because this was a massive upgrade, I need to run some tests on my application.
I've noticed that in the new version there is a cache option on regions, in the page attributes, etc.
I want to ask: since there is no cache option in the old version, and the application in the new version is an import from the old version (I did an export from the old and an import into the new), I want to know:
Is the default in APEX 3.1 to cache these pages?
Or is the default that pages and regions are not cached unless I decide so?
Or do pages and regions have different cache defaults?
This is important for me to know, because if the application defaults to caching, I need to go to every page and change it.
My second question is:
Let's say the pages are cached. Does that mean that if I have processes on such a page (after submit, before header, etc.) they do not run?
thanks in advance
Naama

Hi Scott,
In her first post, Naama is talking about page processes in general, and mentioned 'after submit' and 'before header' – "does that mean that if I have processes on such a page (after submit, before header, etc.) they do not run?" – and your response was also general – "That's correct". I agree that for cached pages the Show-related processes, like 'before header', are not run, but what about the Accept processes, like 'after submit'? Aren't they fired regardless of the cache status?
Thanks,
Arie.

Similar Messages

  • A question about cache group error in TimesTen 7.0.5

    hello, chris:
    we got some errors about cache group:
    2008-09-21 08:56:15.99 Err : ORA: 229574: ora-229574-3085-ogTblGC00405: Failed calling OCI function: OCIStmtFetch()
    2008-09-21 08:56:15.99 Err : ORA: 229574: ora-229574-3085-raUtils00373: Oracle native error code = 1405, msg = ORA-01405: fetched column value is NULL
    2008-09-21 08:56:28.16 Err : ORA: 229576: ora-229576-2057-raStuff09837: Unexpected row count. Expecting 1. Got 0.
    The exact scenario is: our Oracle server was restarted for some reason, but we did not restart the cache group agent. Then these errors started to appear.
    We want to know: if the Oracle server is restarted, do we need to restart the cache agent? thank you..

    Yes, the tracking table will track all changes to the associated base table. Only changes that meet the cache group WHERE clause predicate will be refreshed to TimesTen.
    The tracking table is managed automatically by the cache agent. As long as the cache agent is running and AUTOREFRESH is occurring the table will be space managed and old data will be purged.
    It is okay if, very occasionally, an AUTOREFRESH is unable to complete within its defined interval, but if this happens with any regularity then it is a problem, since that situation is unsustainable. To remedy it you need to try one or more of:
    1. Tune execution of AUTOREFRESH queries in Oracle. This may mean adding additional indexes to some of the cached Oracle tables. There is an article on this in MetaLink (doc note 473493.1).
    2. Increase the AUTOREFRESH interval so that a refresh can always complete within the defined interval.
    In any event it is important that you have enough space to cope with the 'steady state' size of the tracking table. If the cache agent will not be running for any significant length of time, you need to manually clean up the tracking table. In TimesTen 11g a script to do this is provided, but it is not officially supported in TimesTen 7.0.
    If the rate of updates on the base table is such that you cannot arrive at a sustainable situation by tuning etc. then you will need to consider more radical options such as breaking the table into multiple separate tables :-(
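    For example, option 2 is a single DDL statement against the cache group. A minimal sketch, run in ttIsql (my_cache_group is a hypothetical name; check the exact ALTER CACHE GROUP syntax for your TimesTen release):
    -- Pause autorefresh while making the change (hypothetical cache group name)
    ALTER CACHE GROUP my_cache_group SET AUTOREFRESH STATE PAUSED;
    -- Lengthen the interval so each refresh can finish before the next is due
    ALTER CACHE GROUP my_cache_group SET AUTOREFRESH INTERVAL 10 MINUTES;
    -- Resume autorefresh
    ALTER CACHE GROUP my_cache_group SET AUTOREFRESH STATE ON;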
    Chris

  • Question about cache of sequence

    desc table temp1:
    id   number  (PK)
    comp number(5)
    I have a trigger called BI_TEMP1:
    Trigger Type: BEFORE EACH ROW
    Triggering Event: INSERT
    begin
      for c1 in (
        select TEMP1_SEQ.nextval next_val
        from dual
      ) loop
        :new.ID := c1.next_val;
      end loop;
    end;
    The sequence is defined as:
    Min Value 1
    Max Value 999999999999999999999999999
    Increment By 1
    I have a program which inserts 14 rows into the table.
    After the program finished I did:
    select max(id) from temp1;
    14
    When I run the program again to insert those 14 rows again, the sequence starts from number 21 instead of 15.
    Now I know that this is because of the cache, which equals 20.
    My question is: should I use a cache or not?
    I read about the cache option and I did not get the advantages of it.
    In which cases is it better for me to use a cache, and in which is it not?
    thanks in advance

    First of all, using a sequence is no guarantee that you'll end up without gaps! Transactions can be rolled back, etc., just like coffee can be spilt on your chequebook or whatever.
    A cache for the sequence values is useful because it means that Oracle can store the next sequence values in memory, cutting down on the work that needs to be done. If you don't have a cache, this is what happens:
    1. Get the next value from the sequence.
    2. use the value
    3. Get the next value from the sequence.
    4. use the value
    5. Get the next value from the sequence.
    6. use the value
    etc...
    However, if you have a cache of 5, this is what happens:
    1. Get the next 5 values from the sequence and store in memory.
    2. use the first value
    3. use the second value
    4. use the third value
    5. use the fourth value
    6. use the fifth value
    7. Get the next 5 values from the sequence and store in memory.
    8. use the first value
    etc...
    So a cache reduces the number of calls to the sequence. However, as soon as the memory is wiped (a database bounce, shared_pool flush, etc.) the cached sequence numbers are gone, and the next time you ask for the next sequence value, it has to go back to the sequence.
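    The cache size is just a clause on the sequence definition, so it is easy to experiment with. A minimal sketch using the sequence name from the question:
    -- Enlarge the cache: fewer recursive calls per NEXTVAL, bigger possible gaps after a bounce
    ALTER SEQUENCE temp1_seq CACHE 100;
    -- Or disable caching: no cache-related gaps, a little more work per NEXTVAL
    ALTER SEQUENCE temp1_seq NOCACHE;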

  • Question about Lightroom 5: error "Lightroom encountered an error when reading from its preview cache and needs to quit"

    I have a question about Lightroom 5... I used it last night; when I went to get on it today, it would not open. I have an error msg: "Lightroom encountered an error when reading from its preview cache and needs to quit. Lightroom will attempt to fix the problem when reopened."

    https://forums.adobe.com/message/6219922#6219922
    See if the issue in the thread above helps you to solve your problem.

  • Question about LRU in a replicated cache

    Hi Tangosol,
    I have a question about how the LRU eviction policy works in a replicated cache that uses a local cache for its backing map. My cache config looks like this:
    <replicated-scheme>
      <scheme-name>local-repl-scheme</scheme-name>
      <backing-map-scheme>
        <local-scheme>
          <scheme-ref>base-local-scheme</scheme-ref>
        </local-scheme>
      </backing-map-scheme>
    </replicated-scheme>
    <local-scheme>
      <scheme-name>base-local-scheme</scheme-name>
      <eviction-policy>LRU</eviction-policy>
      <high-units>50</high-units>
      <low-units>20</low-units>
      <expiry-delay/>
      <flush-delay/>
    </local-scheme>
    My test code does the following:
    1. Inserts 50 entries into the cache
    2. Checks to see that the cache size is 50
    3. Inserts 1 additional entry (as I understand it, this should cause the eviction logic to kick in)
    4. Checks the cache size again, expecting it to now be 20
    With HYBRID and LFU eviction policies, the above logic works exactly as expected. When I switch to LRU however, the code at step 2 always returns a value significantly less than 50. All 50 inserts appear to complete successfully, so I can only assume that some of the entries have already been evicted by the time I get to step 2.
    Any thoughts?
    Thanks.
    Pete L.
    Addendum:
    As usual, in attempting to boil this issue down to its essential elements, I left out some details that turned out to be important. The logic that causes the condition to occur looks more like:
    1. Loop 2 times:
    2. Create named cache instance "TestReplCache"
    3. Insert 50 cache entries
    4. Verify that cache size == 50
    5. Insert 1 additional entry
    6. Verify that cache size == 20
    7. call cache.release()
    8. End Loop
    With this logic, the problem occurs on the second pass of the loop. Step 4 reports a cache size of < 50. This happens with LRU, LFU, and HYBRID-- so my initial characterization of this problem is incorrect. The salient details appear to be that I am using the same cache name each pass of the loop and that I am calling release() at the end of the loop. (If I call destroy() instead, all works as expected.)
    So... my revised question(s) would be: is this behavior expected? Is calling destroy() my only recourse?

    Robert,
    Attached are my sample code and cache config files. The code is a bit contrived-- it's extracted from a JUnit test case. Typically, we wouldn't re-use the same cache name in this way. What caught my eye however, was the fact that this same test case does not exhibit this behavior when running against a local cache directly (as opposed to a repl cache backed by a local cache.)
    Why call release? Well, again, when running this same test case against a local cache, you have to call release or it won't work. I figured the same applied to a repl cache backed by a local cache.
    Now that I understand this is more a byproduct of how my unit tests are written and not an issue with LRU eviction (as I originally thought), it's not a big deal-- more of a curiosity than a problem.
    Pete L.
    Attachments:
    - coherence-cache-config.xml (to use this attachment, rename 545.bin to coherence-cache-config.xml after the download is complete)
    - LruTest.java (to use this attachment, rename 546.bin to LruTest.java after the download is complete)

  • SUP - 2 questions about the CDB (cache database)

    Hi,
    I have 2 questions about the cache database and the cache groups:
    1 - How exactly does the "On demand" cache group policy work? I know that Online means no data is stored in the CDB and the device makes direct requests to the backend; DCN is based on updates pushed from the backend; Scheduled is based on a time period. But I don't understand how "On demand" works exactly, and why it has a time period too.
    2 - Is it possible to query the cache database tables to check the data that SUP has stored? How can I do this?
    Thank you!

    I posted a similar question in SUP Apps project not too long ago and  Paul Horan provided this useful reply:
    Create a "Sybase ASA v12.x for Unwired Server" connection profile in the Enterprise Explorer.  I named mine CDB.
    : Host = localhost (or whatever the machine name is)
    : Port = 5200
    : Database name = "default"
    : User Name = "dba"
    : Password = "sql"
    Obviously, change the userid/password to match, if you changed them during install time.
    Connect, and you'll see the "default" database displayed.
    Navigate down through the Tables folder, and the first subfolder is labeled something like [#should_delete_sk ...]  Start there.
    You'll see a bunch of tables with the naming convention "D1" + package name + package version + MBO name.  These are the cache tables for the MBOs.
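    Once connected, those cache tables can be queried like ordinary SQL Anywhere tables. A minimal sketch (the MBO table name below is hypothetical; use the actual names you see under the Tables folder):
    -- List the MBO cache tables via the ASA catalog
    SELECT table_name FROM sys.systable WHERE table_name LIKE 'D1%';
    -- Inspect the cached rows for one MBO (hypothetical name)
    SELECT * FROM "D1_MyPackage_1_0_Customer";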

  • Some questions about the integration between BIEE and EBS

    Hi, dear,
    I'm a newbie with BIEE. These days I have had a look at the BIEE architecture and components. The next project includes some BIEE development based on the EBS application. I have some questions about the integration:
    1) Generally, are the BIEE database and application server separate from the EBS database and application? Can both the BIEE 10g and 11g versions be integrated with EBS R12?
    2) In the BIEE Administration tool, the first step is to create physical tables. If the source application is EBS, is it still necessary to create the physical tables?
    3) If the physical table creation is needed, how is the data transfer from the EBS source tables to the BIEE physical tables done? Which ETL tool do most developers prefer: Warehouse Builder or Oracle Data Integrator?
    4) During the data transfer phase, if a very large volume of data needs to be transferred, how is completeness maintained? For example, if 1 million rows are being transferred from the source database to the BIEE physical tables and users open a BIEE report when 50% is complete, can they see the new 50% of the data on the reports? Is there some transaction control in the ETL phase?
    Could anyone give me some guidance? I would also appreciate any other information.
    Thanks in advance.

    1) Generally, are the BIEE database and application server separate from the EBS database and application? Can both the BIEE 10g and 11g versions be integrated with EBS R12?
    You should consider OBI Applications here, which uses OBIEE as the reporting tool with different pre-built modules. Both 10g and 11g come with different versions of BI Apps, which support sources like Siebel CRM, EBS, PeopleSoft, JD Edwards etc.
    2) In the BIEE Administration tool, the first step is to create physical tables. If the source application is EBS, is it still necessary to create the physical tables?
    This is independent of the source. It is OBIEE modeling: you create the RPD with all the layers. If you build from scratch you will need to create all the layers; if BI Apps is used, you get a pre-built RPD along with the other pre-built components.
    3) If the physical table creation is needed, how is the data transfer from the EBS source tables to the BIEE physical tables done? Which ETL tool do most developers prefer: Warehouse Builder or Oracle Data Integrator?
    BI Apps comes with pre-built ETL mappings, mainly for Informatica. Only BI Apps 7.9.5.2 comes with ODI, but Oracle plans to move to ODI only for further releases.
    4) During the data transfer phase, if a very large volume of data needs to be transferred, how is completeness maintained? Is there some transaction control in the ETL phase?
    Users will still see the old data, because it is good practice to turn on the cache and purge it after every load.
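    That purge after each load can be scripted against the BI Server using its cache-purge procedures, for example via nqcmd. A minimal sketch (assuming the standard SAPurgeAllCache() procedure is available in your version):
    -- Purge the entire BI Server query cache once the ETL load completes
    Call SAPurgeAllCache();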
    Refer to http://www.oracle.com/us/solutions/ent-performance-bi/bi-applications-066544.html and the many other docs on Google.
    Hope this helps

  • Question about Kurt's comments discussing the separation of AIA & CDP - Test Lab Guide: Deploying an AD CS Two-Tier PKI Hierarchy - Kurt L Hudson MSFT

    Question about the sentence in bold below. What is the meaning behind this comment?
    How would you separate the roles of the AIA and CDP from a subordinate CA server? I can see where I would add a CES and CEP server which has those as well, but I don't completely understand his comment, because in this second step (http://technet.microsoft.com/en-us/library/tlg-key-based-renewal.aspx)
    he shows how to implement CES and CEP.
    This is from the guide located at: http://technet.microsoft.com/library/hh831348.aspx
    Step 3: Configure APP1 to distribute certificates and CRLs
    In the extensions of the root CA, it was stated that the CRL from the root CA would be available via http://www.contoso.com/pki. Currently, there is not a PKI virtual directory on APP1, so one must be created.
    In a production environment, you would typically separate the issuing CA role from the role of hosting the AIA and CDP.
    However, this lab combines both in order to reduce the number of resources needed to complete the lab.
    Thanks,
    James

    My concern is: they have a 2-3k base of XP systems, and over this year they are migrating them to Windows 7. During this time they will also be upgrading hardware for the existing Windows 7 machines. The turnover of certificates is going to be high, which, from what I've read here, worries me.
    http://blogs.technet.com/b/askds/archive/2009/06/24/implementing-an-ocsp-responder-part-i-introducing-ocsp.aspx
    The application then can go to those locations to download the CRL. There are, however, some potential issues with this scenario. CRLs can get rather large over time, depending on the number of certificates issued and revoked. If CRLs grow to a large size, and many clients have to download CRLs, this can have a negative impact on network performance. More importantly, by default Windows clients will time out after 15 seconds while trying to download a CRL. Additionally, CRLs have information about every currently valid certificate that has been revoked, which is an excessive amount of data given the fact that an application may only need the revocation status for a few certificates. So, aside from downloading the CRL, the application or the OS has to parse the CRL and find a match for the serial number of the certificate that has been revoked.
    With the above limitations, which mostly revolve around scalability, it is clear that there are some drawbacks to using CRLs. Hence the introduction of the Online Certificate Status Protocol (OCSP). OCSP reduces the overhead associated with CRLs. There are server/client components to OCSP: the OCSP Responder, which is the server component, and the OCSP Client. The OCSP Responder accepts status requests from OCSP Clients. When the OCSP Responder receives a request from a client, it needs to determine the status of the certificate using the serial number presented by the client. First the OCSP Responder determines if it has any cached responses for the same request. If it does, it can send that response to the client. If there is no cached response, the OCSP Responder checks whether it has the CRL issued by the CA cached locally. If it does, it can check the revocation status locally and send a response to the client stating whether the certificate is valid or revoked. The response is signed by the OCSP Signing Certificate that is selected during installation. If the OCSP Responder does not have the CRL cached locally, it can retrieve the CRL from the CDP locations listed in the certificate, parse the CRL to determine the revocation status, and send the appropriate response to the client.

  • 2 questions about GPU and Lens Correction, CS5

    Hi
    I have 2 questions about GPU and Lens Correction in CS5.
    1) Filter -> Lens Correction -> Search Online:
    I often (almost every time) get a connection timeout on the first click on Search Online, and on the second click I get no online profile.
    Is that normal?
    2) The second question is about the GPU.
    It runs faster, but regarding adjustment layers, like Saturation or Vibrance for example, I found a slightly slower refresh with the GPU on compared with the GPU off.
    I have set cache levels to 6 and history to 20; I guess those are the defaults.
    Well, I add a saturation layer and move the saturation slider to increase or decrease saturation.
    With the GPU off the changes are immediate; I mean I can see the increase or decrease of saturation in real time.
    With the GPU on it takes a little (very little) more time.
    Again, is that normal?
    Don't be angry; I'm going to buy CS5 and I'm unsure... the price plays a big role.
    thanks

    For what it's worth, I also see a timeout on the first [ Search Online ] click, after about half a minute delay.  Second click turns up results immediately.  This happens each time Lens Correction is started, even without restarting Photoshop, and in both 32 and 64 bit versions.  Also note that I started with one profile listed by default (though from the wrong camera) for my 40D with 28-135 zoom.
    I also noticed that I was seeing progress bar activity in the Lens Correction dialog while I was typing this (even though Lens Correction was NOT the active window) every time I hit the 'L' key. Strange.
    Windows 7 x64.
    -Noel

  • Question about Finder-Load-Beans flag

    Hi all,
    I've read that the Finder-Load-Beans flag could yield some valuable performance gains, but:
    1) Why is it suggested to do individual gets of methods within the same transaction (tx-Required)?
    2) This strategy is useful only for small sets of data, isn't it? I imagine I would set Finder-Load-Beans to false (or use JDBC) for larger sets of data.
    3) A last question: is its default value true or false?
    Thanks
    Francesco

    Because if the get methods are called in different transactions, the state/data of the bean will mostly be reloaded from the database. A new transaction causes the ejbLoad method to be invoked at the beginning and ejbStore at the end. That is the usual case, but there are other ways to modify this behavior.
    Thanks
    Gaurav
    "Francesco" <[email protected]> wrote in message
    news:[email protected]...
    I have found this in the newsgroup. It's from R. Woolen answering a question about the Finder-Load-Beans flag.
    "Consider this case:
    tx.begin();
    Collection c = findAllEmployeesNamed("Rob");
    Iterator it = c.iterator();
    while (it.hasNext()) {
        Employee e = (Employee) it.next();
        System.out.println("Favorite color is: " + e.getFavColor());
    }
    tx.commit();
    With CMP (and finders-load-beans set to its default true value), the findAllEmployeesNamed finder will load all the employees with the name Rob. The getFavColor methods do not hit the db because they are in the same tx, and the beans are already loaded in the cache.
    It's the big CMP performance advantage."
    So I wonder why this performance gain can be achieved when the iteration is inside a transaction.
    Thanks
    regards
    Francesco
    thorick <[email protected]> wrote:
    1) Why is it suggested to do individual gets of methods within the same transaction (tx-Required)?
    I'm not sure about the context of this question (in what document or paragraph it is mentioned).
    2) This strategy is useful only for small sets of data, isn't it? I imagine I would set Finder-Load-Beans to false (or use JDBC) for larger sets of data.
    If you know that you will be accessing the fields of all the beans that you get back from a finder, then you will realize a significant performance gain. If one selects 100s or more beans using a finder, but only accesses the fields of a few, then there may be some performance cost. It could depend on how large some of the fields are. I'd guess that the cost of 1 hit to the DB per bean vs. the cost of 1 + maybe 1 more hit to the DB per bean would usually be less. A performance test using your actual app's beans would be the only way to know for sure.
    3) A last question: is its default value true or false?
    The default is 'True'.
    -thorick

  • Questions about Real Application Testing (RAT)

    Hi All,
    We have a production database running 10gR3 on a server with local drives, and an Oracle 11gR2 DB running on a server with NFS mounts (using an S7310 - AmberRoad), i.e. faster and better storage.
    We captured the load on 10gR3 and replayed it on 11gR2. We noticed the following:
    (1) Replay is considerably slow even though the Oracle 11gR2 instance has faster storage. We suspect that it may be something to do with the buffer cache / SGA, because there is nothing in the cache on the target (we didn't shut down the 10gR3 DB for capture) – what should we do then?
    (2) To make sure that we could take advantage of the cache, we replayed the load a second time right after the first replay, and to our surprise everything ran. So we are wondering how that is possible, since we did not restore the DB (we did not want to wipe the cache – a chicken-and-egg situation). Does Oracle roll back the changes after the replay?
    (3) Do we have to restore the database on the target every time we do a replay? But if we do that, then we won't have anything in the SGA.
    So we need your advice, and we would also like to know how everyone else is doing this testing.
    Regards,
    RJiv.

    DB Replay's workload capture facility allows you to either start capture from a closed (mounted) database (capture starts upon opening the DB), or to begin capture mid-stream during normal activity. Starting capture on the production system from a closed database eliminates the divergence in performance resulting from a primed cache, as well as possible data divergence issues from open, partially-completed transactions at the time the capture started.
    For many customers, it will clearly not be possible to close their database during peak periods (!!)
    One way to address the cache priming issue is to start capture in production from a closed state during a low period of activity, and then allow capture to run through the peak period.
    Another approach is to start capture mid-stream with the DB open and to run capture for a long period (long enough to stabilize the cache). When performing the replay, begin a new AWR snapshot after the cache has stabilized.
    Your question about running the replay again after the first replay is done is confusing. Of course you will not get meaningful data from that, since replay must begin from the capture start SCN. If you run replay twice in a row without reverting the database to the capture start SCN, it will be applying meaningless changes to a database in a state that is unlike that of the original. You will be testing the data errors codepath instead of real performance.
    It is typical to enable database flashback on the replay database so that it can be repeatedly reverted to the capture start SCN for testing under a variety of scenarios.
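    A minimal sketch of that flashback pattern on the replay system (before_replay is a hypothetical restore point name; flashback must already be configured):
    -- Once, before the first replay:
    CREATE RESTORE POINT before_replay GUARANTEE FLASHBACK DATABASE;
    -- After each replay, revert the database to the capture start state:
    SHUTDOWN IMMEDIATE
    STARTUP MOUNT
    FLASHBACK DATABASE TO RESTORE POINT before_replay;
    ALTER DATABASE OPEN RESETLOGS;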
    Regards,
    Jeremiah Wilton
    Blue Gecko, Inc.
    http://www.bluegecko.net

  • Some questions about 9iAS R1.2.2 & R2, need your help

    Some questions about 9iAS R1.2.2 & R2; we need your help.
    Since 2000, we have used 9iAS Core (R1.2.2) to publish our website. The platform is Sun Solaris (SPARC). The database is Oracle 8i. But there are a number of problems:
    1. The Web Cache can't be installed cleanly.
    2. Web pages that use JSP to query the database with SQL show errors when we refresh the page a few times. The error disappears after restarting 9iAS or refreshing the page again. On the other hand, the same pages don't show the error on Resin. The error is java.sql.SQLException: Closed Connection: next. I suppose the database connection has some hidden trouble, but I can't find it; could you give me some advice?
    Now I have installed 9iAS R2, but when I visited the management pages I found that a password is needed to visit the Web Cache manage page. I don't think I have set this password, so I can't control the Web Cache. I want to know: is there a default password? If not, what is the password?

    The default password for Web Cache is: administrator
    See walkthrough on the sample code page: http://otn.oracle.com/sample_code/products/ias/content.html
    HTH,
    Ashesh Parekh
    Oracle9iAS product management

  • A question about DNS subdomain

    This is a question about a DNS subdomain.
    The DNS server for the parent DNS domain is dns1.ours.com.
    The DNS server for the child/sub DNS domain is bee.child.ours.com.
    Configurations on dns1.ours.com:
    File db.ours.com:
    @ IN SOA dns1.ours.com. postmaster.ours.com. (
    10051215 ; sn
    86400 ;refresh
    7100 ;retry
    777600 ;expire
    126000 ) ;min
    @ IN NS dns1.ours.com.
    dns1 IN A 210.x.x.15
    ...
    child.ours.com. IN NS bee.child.ours.com.
    bee.child.ours.com. IN A 210.x.x.10
    I did not change anything in named.conf.
    Configurations on bee.child.ours.com:
    File db.child.ours.com:
    @ IN SOA bee.child.ours.com. test.child.ours.com (
    10051215 ; sn
    86400 ;refresh
    7100 ;retry
    777600 ;expire
    126000 ) ;min
    @ IN NS bee.child.ours.com.
    bee IN A 210.x.x.10
    test IN A 210.x.x.x
    File named.conf:
    options {
        directory "/var/named";
    };
    zone "." {
        type hint;
        file "master/db.cache";
    };
    zone "0.0.127.in-addr.arpa" {
        type master;
        file "master/db.0.0.127";
    };
    zone "x.x.210.in-addr.arpa" {
        type master;
        file "master/db.child.ours.com.rev";
    };
    zone "child.ours.com" {
        type master;
        file "master/db.child.ours.com";
    };
    #nslookup
    Default Server: 210.x.x.10
    Address: 210.x.x.10
    // bee.child.ours.com: the DNS server for the child/sub DNS domain: child.ours.com
    > www.ours.com
    Server: 210.x.x.10
    Address: 210.x.x.10
    *** localhost can't find www.ours.com: No response from server
    // failed to resolve A records in the parent domain, but can resolve A records in its own domain and other domains on the Internet
    > set type=ns
    > ours.com
    Server: 210.x.x.10
    Address: 210.x.x.10
    Non-authoritative answer:
    ours.com nameserver = dns1.ours.com
    Authoritative answers can be found from:
    dns1.ours.com internet address = 210.x.x.15
    // finds the DNS server for the parent domain
    > server 210.x.x.15
    // dns1.ours.com: the DNS server for the parent DNS domain: ours.com
    Default Server: dns1.ours.com
    Address: 210.x.x.15
    > test.child.ours.com
    Server: dns1.ours.com
    Address: 210.x.x.15
    *** dns1.ours.com can't find test.child.ours.com: No response from server
    // failed to resolve A records in the child domain, but can resolve A records in its own domain and other domains on the Internet
    > set type=ns
    > child.ours.com
    Server: dns1.ours.com
    Address: 210.x.x.15
    Non-authoritative answer:
    child.ours.com nameserver = bee.child.ours.com
    Authoritative answers can be found from:
    bee.child.ours.com internet address = 210.x.x.10
    // finds the DNS server for the child domain
    > server 210.x.x.100
    // a public DNS server on the Internet
    Default Server: [210.x.x.100]
    Address: 210.x.x.100
    > set type=a
    > www.ours.com
    Server: [210.x.x.100]
    Address: 210.x.x.100
    Non-authoritative answer:
    Name: www.ours.com
    Address: 210.x.x.72
    // finds the A record in the parent domain
    > test.child.ours.com
    Server: [210.x.x.100]
    Address: 210.x.x.100
    Non-authoritative answer:
    Name: test.child.ours.com
    Address: 210.x.x.x
    // finds the A record in the child domain
    I wonder why. It is BIND v8.2.2.
    Thanks.

    Hi AAnotherUser_,
    Based on your description, the internal domain name is different from the external domain name, and the web server is hosted internally. The goal is for internal users to access the web server using a URL which includes MyCorp.com.
    In this scenario, internet users access your domain name by connecting to the WAN IP address of your router. However, to let the internal users access the website, you need to create the external domain name as a zone on your internal DNS server.
    After creating the DNS zone, right-click the zone you created and choose New Host Record.
    Type in the hostname, such as 'www', and provide the private IP address of your internal web server.
    For more details, please refer to Ace's blog below, under "Scenario 2: Different Internal and External but you are hosting the webserver internally":
    http://blogs.msmvps.com/acefekay/2009/09/03/split-zone-or-no-split-zone-can-t-access-internal-website-with-external-name/
    Best Regards,
    Tina

  • AutoSPInstaller - Question about Search Topology

    Hello,
    I am deploying a new SharePoint infrastructure with AutoSPInstaller and I have some questions about the search topology.
    This infrastructure will eventually host 2 document management (EDM) applications (10-20 TB) and probably some user sites.
    I will provision 2 WFE and 2 APPS.
    Search Topology (4 servers) =>
    - Crawl Component :
    SRVA & SRVB
    - Query Component :
    SRVC & SRVD
    - Search Query and Site Settings Service :
    SRVA & SRVB
    - Admin Component :
    SRVA & SRVB
    - Index Component :
    SRVC & SRVD
    - Content Processing Component :
    SRVA & SRVB
    - Analytics Porcessing Component :
    SRVA & SRVB
    I have read many articles about this subject, but they were all different!
    Can anyone confirm my choices or make a proposal?

    Hi,
    I just want to confirm that the infrastructure is correct before deploying it. Do I really need to isolate the admin component? For now I have put it on only one app server.
    For "Distributed caching", I plan to install it on both WFE servers.
    Thank you for your help.
    Jeremy

  • A few questions about how ZPM works.

    We have patch management (in ZCM 11.2.2), but honestly don't use it much. I have a few questions about how it works that might make me use it more, if I understand it more.
    If I deploy a patch (or a set of patches), it creates a bundle for that deployment. That bundle seems to include actions that deploy the actual patch bundles (correct?). Do I have to recreate a new deployment bundle every time I want to push a new patch? i.e. If I push a Java update, and a month later, a new one comes out, do I build out a new bundle with the new patch in it, or do I modify the old one?
    Once the patch is deployed, can I safely delete that deployment bundle, or should they just pile up?
    Is there a way to "auto-approve" patches? Let's say I always want a group of machines to have the latest Adobe Flash Player patches. Can I set up ZPM to automatically cache and push the latest patches for a specific product, or do I have to manually remediate each patch? (I'm thinking of how MS's WSUS does "auto-approval".)
    I see that most packages aren't cached in the list, but occasionally a patch is cached without me touching it. Why? Can I change what gets automatically cached?
    Thanks for any help/answers you can provide.
    -Adam

    Originally Posted by adrockk
    [...]
    For #1, (assuming you're not using baselines), you would check the new version of the patch (vulnerability) and do a new deployment.
    #2 - once you're satisfied that the machines are deployed (to the best of your ability) you can delete the DEPLOYMENT package. It doesn't delete the actual vulnerability bundles, to my knowledge. That's why it's a good idea to name your bundle deployments with something meaningful, IMO (and maybe include a nice description).
    #3 - currently I don't believe this is possible. I know you can probably configure it to auto-download the patches, but not auto-deploy everything. Given the propensity for software to wreck other things (hello MS .NET patches), this is probably not a good idea. At least I'd never auto-download and auto-deploy any patches without testing them first, and certainly take my servers a little more cautiously than my workstations.
    #4 - I think you can configure what's cached, but I could be wrong.
    I know there's a lot of improvements coming in the pipeline, and it doesn't hurt to "vote" for your enhancements via the enhancement system (more work for Shaun--haha)
    --Kevin
