EJB Not Retrieving database updates

Hello,
I have named queries in entity EJBs built from tables. For example:
@Entity
@NamedQueries({
    @NamedQuery(name = "DadUser.findAll", query = "select o from DadUser o"),
    @NamedQuery(name = "DadUser.findOne", query = "select o from DadUser o " +
                                                  "where o.userId = :user_id")
})
@Table(name = "DAD_USER")
public class DadUser implements Serializable {
    @Column(name="ACTIVE_FLAG")
    private String activeFlag;
    @Column(name="FULL_NAME")
    private String fullName;
    private String secure;
    @Id
    @Column(name="USER_ID", nullable = false)
    private String userId;

    public DadUser() {
    }

    // getters and setters .....
    public String getActiveFlag() {
        return activeFlag;
    }
    // ...
}
My Session Facade:
@Stateless(name="dadFacade")
public class dadFacadeBean implements dadFacadeRemote, dadFacadeLocal {
    @PersistenceContext(unitName="Model")
    private EntityManager em;

    public dadFacadeBean() {
    }

    public Object mergeEntity(Object entity) {
        return em.merge(entity);
    }

    public Object persistEntity(Object entity) {
        em.persist(entity);
        return entity;
    }

    /** <code>select o from DadUser o where o.userId = :user_id</code> */
    public DadUser queryDadUserFindOne(Object user_id) {
        return (DadUser) em.createNamedQuery("DadUser.findOne")
                           .setParameter("user_id", user_id)
                           .getSingleResult();
    }
}
Backing bean:
@EJB
private dadFacadeLocal df;

public List<DadRequests> getUsers() {
    return df.queryDadUserFindOne('test');
}
If I make a change to a field on another form, the changed data displays in the list. However, if I make a change directly in the Oracle database and commit it, the changed value does not appear in the JSF app unless I redeploy it. I close the app, reopen it, and clear IE, and the old value still displays. I have tried em.flush(), but nothing seems to refresh the list when Oracle is changed directly. Please HELP! Thank you.
Edited by: GaryJSF on Jan 31, 2008 11:53 AM

Just a shot:
I don't know if you typed it right, but you call:
public DadUser queryDadUserFindOne(Object user_id) {
        return (DadUser) em.createNamedQuery("DadUser.findOne").setParameter("user_id", user_id).getSingleResult();
    }
which returns a DadUser, and then put it into a List of DadRequests:
public List<DadRequests> getUsers() {
        return df.queryDadUserFindOne('test');
    }
I don't see how that can work ;), but you asked about EJB.
The reason the EJB returns old (not updated) values is that the entity bean is held on the app server and cached there. If you had changed the data in the DB via the EJB module, it would also have updated the cache in the entity bean container.
So I think it has nothing to do with JSF :].
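To make the caching point above concrete, here is a pure-Java toy (NOT real JPA; ToyEntityManager and every name in it are made up for illustration) showing why a server-side cache keeps serving the old value when the database is changed behind its back, and why an explicit refresh picks up the new one:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration only, not real JPA: a "persistence context" that caches
// rows read from a "database" map, the way an app server caches entity beans.
class ToyEntityManager {
    private final Map<String, String> database;                // stands in for Oracle
    private final Map<String, String> cache = new HashMap<>(); // server-side cache

    ToyEntityManager(Map<String, String> database) {
        this.database = database;
    }

    // find(): answers from the cache when possible, so a value committed
    // directly in the database is never seen here
    String find(String id) {
        return cache.computeIfAbsent(id, database::get);
    }

    // refresh(): drops the cached copy and re-reads from the database, the
    // analogue of JPA's EntityManager.refresh(entity) on a managed instance
    String refresh(String id) {
        cache.put(id, database.get(id));
        return cache.get(id);
    }
}
```

With this toy, find() keeps returning the stale value after the underlying map changes, and only refresh() sees the new one, matching the behaviour described in the question. In real JPA, em.refresh(theEntity) forces a re-read from the database; note that em.flush() does the opposite: it pushes pending changes out and never pulls fresh data in.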

Similar Messages

  • HT1338 Not able to update Final Cut Pro X to latest version.

    Macbook Pro 15" 2.6gHz, Retina display notebook:
    When I try to update Final Cut Pro X (version 10.0.1) from within the software, the dialogue returns that all software is up to date.  I know for a fact that FCP X is now at version 10.0.5.  Spotlight is enabled for all system files and software.  How do I update my FCP X?  Help.  Thank you.

    Thank you for your reply.  I followed your suggestion:  Checked once more in the App Store for the Update to Final Cut Pro 10.0.4 and did NOT find it.  So, I deleted FCP X version 10.0.0 to the trash bin.  I emptied the Trash Bin.  Now when I go to the App Store, the only option I can find is to purchase a new application.  There is no where to simply download the application in the App Store that I can find.   I hope I have not been led to delete my working application and can not retrieve an update!!
    What do you further suggest at this juncture?  Thank you.
    Message was edited by: Protiusmime

  • The data from a table of a database are not retrieved in ADF BC Application

    I have two separate ADF BC applications. Each application retrieves
    data from the same schema of a database. Both applications work
    well when I launch them under the embedded OC4J. I have deployed
    these applications to a standalone OC4J. When I launch one application,
    the data from a database table are retrieved. When I launch the other
    application, the data from a database table are not retrieved. Why? Please,
    help me.

    Any error message ?
    Frank

  • "The SPListItem being updated was not retrieved with all taxonomy fields." Exception while updating a taxaonomy field

    Hi All,
    I'm getting the exception "The SPListItem being updated was not retrieved with all taxonomy fields." when I try to programmatically update a taxonomy field of a list item. Can anyone please tell me why this exception occurs?

    Recently hit this myself, as well.  Turns out it's a central admin setting that throttles the lookup return count, and Taxonomy fields are just lookups under the hood.
    Go into Central Administration, Manage Web Applications, select your web application, and then in the ribbon choose the dropdown under General Settings select Resource Throttling.  Find the setting for "List View Lookup Threshold" and raise
    it from the default 8 (can go up to 1000, but 20 is likely fine depending how many lookup fields you're pulling back in your SPListItem).

  • Using JNDI to make EJB Find MySQL DataBase

    Hello, I'm new to EJB, and I'm trying to develop an Entity EJB which will retrieve data from a simple MySQL table. But I'm getting this exception: javax.naming.NoInitialContextException
    scale is the name of my DB.
    try {
        initialContext = new InitialContext();
        Object homeObject = initialContext.lookup( "java:comp/env/jdbc/scale" );
        exemploHome = ( EntityExemploHome )PortableRemoteObject.narrow( homeObject, EntityExemploHome.class );
      }catch(NamingException namingException){
        namingException.printStackTrace( System.err );
      }
    I don't figure how this simple String should make the EJB find my database; I don't know if I deployed it wrong (I used the deploytool). Further, the code of the EJB Impl class:
    import java.sql.*;
    import java.rmi.RemoteException;
    import javax.ejb.*;
    import javax.sql.*;
    import javax.naming.*;

    public class EntityExemploEJB implements EntityBean {
      private EntityContext entityCont;
      private Connection con;
      private Integer condCNPJ;
      private String condNome;

      public Integer getCondCNPJ() {
        return condCNPJ;
      }

      public void setCondNome( String nome ) {
        condNome = nome;
      }

      public String getCondNome() {
        return condNome;
      }

      public Integer ejbCreate( Integer primaryKey ) throws CreateException {
        condCNPJ = primaryKey;
        // INSERT
        try {
          Statement statement = con.createStatement();
          String insert = "INSERT INTO Condominio (condCNPJ) VALUES ("+condCNPJ.intValue()+")";
          statement.executeUpdate(insert);
          return condCNPJ;
        }catch ( SQLException sqlException ) {
          throw new CreateException( sqlException.getMessage());
        }
      }//END ejbCreate

      public void ejbPostCreate( Integer primaryKey ) {}

      public void ejbStore() throws EJBException{// UPDATE
        try {
          Integer primaryKey = (Integer)entityCont.getPrimaryKey();
          Statement statement = con.createStatement();
          // create UPDATE statement (note: the SET clause needs "= value")
          String update = "UPDATE Condominio SET condNome = '"+condNome+
                          "' WHERE condCNPJ = "+primaryKey.intValue();
          statement.executeUpdate(update);
        }catch(SQLException sqlException){
          throw new EJBException( sqlException );
        }
      }//END ejbStore

      public void ejbLoad() throws EJBException{
        try {
          Integer primaryKey = (Integer) entityCont.getPrimaryKey();
          Statement statement = con.createStatement();
          String select = "SELECT * FROM Condominio WHERE condCNPJ = "+primaryKey.intValue();
          ResultSet resultSet = statement.executeQuery(select);
          if (resultSet.next()) {
            condCNPJ = new Integer( resultSet.getInt( "condCNPJ" ));
            condNome = resultSet.getString("condNome");
          }else{
            throw new EJBException( "No such employee." );
          }
        }catch ( SQLException sqlException ) {
          throw new EJBException( sqlException );
        }
      }//END ejbLoad

      public Integer ejbFindByPrimaryKey(Integer primaryKey)throws FinderException, EJBException {
        try {
          Statement statement = con.createStatement();
          String select = "SELECT condCNPJ FROM Condominio WHERE condCNPJ = "+primaryKey.intValue();
          ResultSet resultSet = statement.executeQuery(select);
          if(resultSet.next()){
            resultSet.close();
            statement.close();
            return primaryKey;
          }else{
            // throw ObjectNotFoundException if SELECT produces no records
            throw new ObjectNotFoundException();
          }
        }catch ( SQLException sqlException ) {
          throw new EJBException( sqlException );
        }
      }//END ejbFindByPrimaryKey

      public void setEntityContext( EntityContext context )throws EJBException{
        // set entityContext
        entityCont = context;
        try {
          InitialContext initialContext = new InitialContext();
          // get DataSource reference from JNDI directory
          DataSource dataSource = ( DataSource ) initialContext.lookup( "java:comp/env/jdbc/escala" );
          // get Connection from DataSource
          con = dataSource.getConnection();
        }catch ( NamingException namingException ) {
          throw new EJBException( namingException );
        }catch ( SQLException sqlException ) {
          throw new EJBException( sqlException );
        }
      }//END setEntityContext

      public void unsetEntityContext() throws EJBException{
        entityCont = null;
        // close DataSource Connection
        try {
          con.close();
        }catch( SQLException sqlException ) {
          throw new EJBException( sqlException );
        }finally {
          con = null;
        }
      }//END unsetEntityContext

      public void ejbPassivate(){
        condCNPJ = null;
      }

      // get primary key value when container activates EJB
      public void ejbActivate(){
        condCNPJ = ( Integer ) entityCont.getPrimaryKey();
      }

      public void ejbRemove() throws RemoveException{}
    }//Closes the EJB
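    A side note on the listing above: it builds SQL by string concatenation, which breaks on quotes in the data and invites SQL injection; java.sql.PreparedStatement binds values as parameters instead. A minimal sketch (the UpdateBuilder class and its names are mine, and no database is required here) of assembling the parameterized text and value list that the ejbStore UPDATE would pass to prepareStatement():

```java
import java.util.Arrays;
import java.util.List;

// Sketch: assemble a parameterized UPDATE instead of pasting values into the
// SQL text. With a live java.sql.Connection you would call
// connection.prepareStatement(u.sql) and then bind each value with
// ps.setObject(i + 1, u.params.get(i)) before executeUpdate().
class UpdateBuilder {
    final String sql;
    final List<Object> params;

    UpdateBuilder(String table, String setColumn, Object setValue,
                  String keyColumn, Object keyValue) {
        // '?' placeholders keep the values out of the SQL text entirely,
        // so quotes in the data cannot break the statement
        this.sql = "UPDATE " + table + " SET " + setColumn + " = ? WHERE " + keyColumn + " = ?";
        this.params = Arrays.asList(setValue, keyValue);
    }
}
```

    For the ejbStore case this yields "UPDATE Condominio SET condNome = ? WHERE condCNPJ = ?" with the name and key supplied separately, which also sidesteps the kind of missing "= value" mistake the concatenated version had.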

    Hi,
    Which server are you using? I think you have not included the server JAR file in your classpath. For WebLogic Server, the lookup should be like this:
    Properties p = new Properties();
    p.put(Context.INITIAL_CONTEXT_FACTORY,"weblogic.jndi.WLInitialContextFactory");
    p.put(Context.PROVIDER_URL,"t3://localhost:7001");
    InitialContext ctx = new InitialContext(p);
    Object obj = ctx.lookup("HelloBean");
    regards
    jogesh

  • Claims debacle (error) with Term Store: "Could not retrieve a valid windows identity" for all sites in a particular web app.

    When I pull up the Term store in CA or any MySite collection, it works.
    When I do so in any other site collection (HNSCs, incidentally), it doesn't return any term stores.
    My ULS log immediately before and after the "/_vti_bin/taxonomyinternalservice.json/CheckPermission" POST on termstore .aspx triggers the WCF call:
    Claims Authentication af30y Verbose Claims Windows Sign-In: Successfully signed-in the the user 'contoso\domainUser' for request url 'https://sp13-root-prd.contoso.com/_vti_bin/taxonomyinternalservice.json/CheckPermission'.
    Claims Authentication af30q Verbose Updating header 'LOGON_USER' with value '0#.w|contoso\domainUser' for the request url 'https://sp13-root-prd.contoso.com/_vti_bin/taxonomyinternalservice.json/CheckPermission'.
    Authentication Authorization agb9s Medium Non-OAuth request. IsAuthenticated=True, UserIdentityName=0#.w|contoso\domainUser, ClaimsCount=77
    Logging Correlation Data xmnv Medium Site=/
    Topology e5mc Medium WcfSendRequest: RemoteAddress: 'http://CONTOSOFE3:32843/00e6d55691824965ac223f1d1cfae6d2/MetadataWebService.svc' Channel: 'Microsoft.SharePoint.Taxonomy.IMetadataWebServiceApplication' Action: 'http://schemas.microsoft.com/sharepoint/taxonomy/soap/IDataAccessReadOnly/GetChanges2' MessageId: 'urn:uuid:590e916c-c89a-4f89-9819-a82c97fabcaa'
    Claims Authentication bz7l Medium SPSecurityContext: Could not retrieve a valid windows identity for username 'contoso\domainUser' with UPN '[email protected]'. UPN is required when Kerberos constrained delegation is used. Exception: System.ServiceModel.FaultException`1[System.ServiceModel.ExceptionDetail]: WTS0003: The caller is not authorized to access the service. (Fault Detail is equal to An ExceptionDetail, likely created by IncludeExceptionDetailInFaults=true, whose value is: System.UnauthorizedAccessException: WTS0003: The caller is not authorized to access the service. at Microsoft.IdentityModel.WindowsTokenService.CallerSecurity.CheckCaller(WindowsIdentity callerIdentity) at Microsoft.IdentityModel.WindowsTokenService.S4UServiceContract.PerformLogon(Func`1 logonOperation, Int32 pid) at SyncInvokeUpnLogon(Object , Object[] , Object[] ) at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs) at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc) at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc) at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage31(MessageRpc& rpc) at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet))..
    Claims Authentication g220 Unexpected No windows identity for contoso\domainUser.
    The "The caller is not authorized to access the service." message seems pertinent.
    Both web apps are using only NTLM auth.
    The url for both web apps ends in the same contoso.com domain. 
    I get the same errors no matter what account I use, including the install account.
    Things I've tried:
    Deleting and building a new HNSC root web app and site. Error happens in all sites in all web apps except the PBSC hosting MySites.
    Giving the root site app pool identity full control of the metadata service app (even though the MySite identity doesn't have it)
    Giving the root site app pool identity full permissions on the metadata service app.
    Comparing database and web app config permissions between dev (where everything works perfectly) and prod (where it does not).
    Made sure IIS auth settings on both sites are identical
    Both sites are using the same SSL certificate (though the call to the web service appears to be http)
    Reprovisioned the metadata service app with a new database and new app pool identity.
    Made sure C2WT is running. Tried it with the service stopped as well.
    Web.configs are identical between working and non-working apps.
    I'm stumped but still Googling. I'm hoping to avoid having to call Microsoft. Any help would be appreciated!
    UPDATE:
    Interestingly, when I restored the web application from backup (via CA), I ended up with 3 identical "Windows Authentication" authentication providers assigned to the problem web app. Since there was more than one, I was directed to the provider-chooser
    page when visiting the site. Upon choosing 1 of the 3, I was authenticated, and *poof*, no more authentication errors and the term store loaded term sets as expected.
    Of course, 3 providers was not an ideal state, so I grabbed the one that worked (#1) via get-spauthenticationprovider, and assigned it to the web app via set-spwebapplication, and my problem returned.
    I am currently updating the farm to SP1 from June 2013 CU. Fingers crossed.
    Update:
    The update to SP1 went smoothly, but did not resolve the issue. Also related (I believe) are the random authentication errors when trying to upload images to some libraries, and 401-errors on the accessdenied.aspx page itself.
    Update:
    The problem is resolved, seemingly after making 4 changes. I'm trying to narrow down which change was the cure, if any:
    I installed SP1 on all 6 servers, rebooted and upgraded. This appeared to have no effect.
    Removed an old login from SQL that no longer existed in AD because of this ULS error:
    System.Runtime.InteropServices.COMException: The user or group contoso\svc_xxxxxxxxx' is unknown., StackTrace:    at Microsoft.SharePoint.Utilities.SPUtility.GetFullNameFromLoginEx(String loginName, Boolean&
    bIsDL)
    This login was the identity of the application pool that used to run the web app in question.
    This login was the schema owner of a schema named after itself on every SharePoint database so I changed the schema owner to dbo but left the schema attached.
    The problem may have surfaced initially when the app pool identity was changed in CA, but went unnoticed?
    Note that the web app had been deleted and recreated many times with a new identity and pool to no avail, but the URL remained the same throughout each attempted fix. Relevant?
    Grasping at straws, I changed the app pool identity for this web app to the same one that runs the MySite web app pool as per this only slightly related problem: http://www.planetsharepoint.org/m/preview.php?id=372&rid=34764&author=Vlad+Catrinescu
    I changed the authentication method from NTLM to Negotiate.
    I am rolling back #3 and #4 to see if the issue resurfaces.
    Update:
    It doesn't appear to have been the NTLM/Negotiate setting. Web app is currently set to NTLM and all is well. No strange accessdenies, and term Store is still manageable from all sites.
    Update: Sorry for the delay. I am administering 6 farms these days. Will update as soon as the final phase of rollbacks happens.
    I think I can. I think I can.

    maybe that web app was accidentally created with classic auth?
    here's an example of how to create claims based, with classic, and then "doing 2013" claims
    #Create the example web application, as mentioned above, either with the GUI (and pick later), or:
    New-SPWebApplication -ApplicationPool $applicationPool -ApplicationPoolAccount $serviceAcct -Name $WebApp -Port 5050 -DatabaseName $contentDB -SecureSocketsLayer
    #If doing it for 2013:
    New-SPWebApplication -ApplicationPool $applicationPool -ApplicationPoolAccount $serviceAcct -Name $WebApp -Port 5050 -AuthenticationProvider (New-SPAuthenticationProvider) -DatabaseName $contentDB -SecureSocketsLayer

  • Polling for database updates fails to update sequence file/table

    I have a small system to poll for changes to a database table of student details. It consists of:
    Database adapter, which polls the database for changes, retrieves a changed row, and passes the generated XML to -
    "receive" router service, which simply passes the retrieved database XML data to -
    "execute" router service, which transforms the XML and passes the new message to -
    File Adapter, which writes the transformed XML to a file.
    The problem is that the database polling is not updating the sequence recorder. I have tried using a sequence file which stores the personnumber, a sequence table which stores the personnumber, and a sequence table which stores the last-updated timestamp. In all cases the database adapter successfully reads the sequence file/table and retrieves the correct row based on the sequence value (I have tested this by manually changing the sequence value), the data is transformed, and a correct XML file is created. But the system then fails to update the sequence file/table, so that when the next polling time comes around the very same database row is extracted again and again and again.
    In the ESB control panel I have one error: "Response payload for operation "receive" is invalid!"
    The Trace is:
    oracle.tip.esb.server.common.exceptions.BusinessEventRejectionException: Response payload for operation "receive" is invalid! at oracle.tip.esb.server.service.EsbRouterSubscription.processEventResponse(Unknown Source) at oracle.tip.esb.server.service.EsbRouterSubscription.onBusinessEvent(Unknown Source) at oracle.tip.esb.server.dispatch.EventDispatcher.executeSubscription(Unknown Source) at oracle.tip.esb.server.dispatch.InitialEventDispatcher.processSubscription(Unknown Source) at oracle.tip.esb.server.dispatch.InitialEventDispatcher.processSubscriptions(Unknown Source) at oracle.tip.esb.server.dispatch.EventDispatcher.dispatchRoutingService(Unknown Source) at oracle.tip.esb.server.dispatch.InitialEventDispatcher.dispatch(Unknown Source) at oracle.tip.esb.server.dispatch.BusinessEvent.raise(Unknown Source) at oracle.tip.esb.utils.EventUtils.raiseBusinessEvent(Unknown Source) at oracle.tip.esb.server.service.impl.inadapter.ESBListenerImpl.processMessage(Unknown Source) at oracle.tip.esb.server.service.impl.inadapter.ESBListenerImpl.onMessage(Unknown Source) at oracle.tip.adapter.fw.jca.messageinflow.MessageEndpointImpl.onMessage(MessageEndpointImpl.java:281) at oracle.tip.adapter.db.InboundWork.onMessageImpl(InboundWork.java:1381) at oracle.tip.adapter.db.InboundWork.onMessage(InboundWork.java:1291) at oracle.tip.adapter.db.InboundWork.transactionalUnit(InboundWork.java:1262) at oracle.tip.adapter.db.InboundWork.runOnce(InboundWork.java:501) at oracle.tip.adapter.db.InboundWork.run(InboundWork.java:401) at oracle.tip.adapter.db.inbound.InboundWorkWrapper.run(InboundWorkWrapper.java:43) at oracle.j2ee.connector.work.WorkWrapper.runTargetWork(WorkWrapper.java:242) at oracle.j2ee.connector.work.WorkWrapper.doWork(WorkWrapper.java:215) at oracle.j2ee.connector.work.WorkWrapper.run(WorkWrapper.java:190) at EDU.oswego.cs.dl.util.concurrent.PooledExecutor$Worker.run(PooledExecutor.java:819) at java.lang.Thread.run(Thread.java:595)
    The payload is:
    <PersonCollection xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/top/ISSSimIN">
    <Person>
    <name>Bob</name>
    <surname>Stupid</surname>
    <istatus>2</istatus>
    <active>Active</active>
    <personnumber>3001</personnumber>
    <role>Staff</role>
    <organization>TEX</organization>
    <updateTimestamp>2007-11-29T14:06:55.000+00:00</updateTimestamp>
    </Person>
    </PersonCollection>
    The payload above is the XML output from the Database adapter, the xsd for which was autogenerated by JDeveloper, so I don't understand how it can be invalid.
    I have only been working with ESB for 3 days, but half of that time has now been spent stuck on this issue. Anyone got any ideas??
    Richard

    You need to be careful. Submit and committing to the database are two different things.
    Submit on a page (and the autoSubmit property) only posts the changes you typed into the field back down to the application server. NOTHING is happening with the database at this point. So the application's internal "cache" of records (using something like ADF BC) is holding the changes, but the database doesn't know anything. So if you quit now, no changes will go to the database.
    These changes only get submitted down to the app server when you perform some sort of action, like pressing a button. Simply navigating fields sends nothing back to the app server. If you add autoSubmit, the changes on that field are automatically submitted to the app server when you leave the field (but again, this does not touch the database).
    You have to explicitly add a COMMIT operation to save those changes to the database.
    Maybe if you try in a simple EMP example and send us your findings we can help direct you further.
    Regards
    Grant

  • Constructing Database Update DML Statements

    This is a very common problem I am sure but want to know how others handle it. I am creating a web application via Servlets and JSPs. I am working with Oracle 10g.
    Here's the problem/question.
    When I present an html form to a user to update an existing record, how do I know what has changed in the record to create the dml statement? I could just update the entire record with the values supplied via the POST data on submit but that does not seem right since maybe only 1 out of 10 fields actually changed. This would also write needless audit and logging information to the database. I have thought about comparing the Request parameters sent with the post with the original java bean used to populate the form in the first place by adding it back to the request as an attribute. Is this the only way to do it or is there a better way?
    Thanks everyone,
    -Brian
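    One way to sketch the bean-versus-request comparison Brian describes, with hypothetical names and plain maps standing in for the original bean and the POST parameters: diff the originally loaded values against the submitted ones and emit SET clauses only for the columns that actually changed (values are assumed non-null).

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

// Sketch: build an UPDATE that touches only changed columns by comparing the
// originally-loaded values with the values posted back from the HTML form.
class DirtyFieldDiff {
    // returns column -> new value for every field whose value changed
    static Map<String, String> changed(Map<String, String> original,
                                       Map<String, String> submitted) {
        Map<String, String> dirty = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : submitted.entrySet()) {
            if (!e.getValue().equals(original.get(e.getKey()))) {
                dirty.put(e.getKey(), e.getValue());
            }
        }
        return dirty;
    }

    // parameterized UPDATE text covering the dirty columns only
    static String updateSql(String table, Map<String, String> dirty, String keyColumn) {
        StringJoiner set = new StringJoiner(", ");
        for (String column : dirty.keySet()) {
            set.add(column + " = ?");
        }
        return "UPDATE " + table + " SET " + set + " WHERE " + keyColumn + " = ?";
    }
}
```

    This is essentially what ORM dirty checking does automatically; done by hand, it keeps the audit log quiet for the nine fields out of ten that never changed, at the cost of carrying the original bean along in the request or session.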

    What database drivers are you using? I recall there being some bug in
    sp2 that caused delayed transaction commits for a very specific
    combination of transaction settings and database drivers.
    I think it was third-party type II Oracle drivers and local
    transactions, but I'm not sure.
    In any case, I'd contact support as there is a patch available.
    David
    Gurjit wrote:
    Hi all,
    I have posted something of this sort earlier too. The problem is that
    the database is not reflecting what has been updated using queries from the
    application.
    The structure of the application is as follows.
    DB = Oracle 8.1.6
    Web Server = iPlanet 4.0
    App Server = iPlanet 6.0 sp2
    The database is connected to via the EJB's. No Database transactions are
    being used except for Bean transactions which are container managed. The
    setAutoCommit flag is set to true. Each query is a transaction. The jsp's
    call these ejb's through wrapper classes. As stated earlier the database
    updates (Inserts, update, delete statements) are not getting reflected in
    the database immediately. The updates happen after a gap of about 7-10
    minutes. Why is this sort of behaviour coming up. This is almost like batch
    updates happening on the database. Is there a setting in IPlanet for
    removing this kind of problem.
    Regards,
    Gurjit

  • CVU ERROR:Unable to retrieve database release version

    Hi all, I want to install RAC on three nodes,but when I run "cluvfy" as ORACLE user, I got errors:
    ./runcluvfy.sh stage -pre crsinst -n nrac1,nrac2,nrac3 -verbose
    ERROR:
    Unable to retrieve database release version.
    Verification cannot proceed.
    but the root user can run it with few errors. Why?
    What can I do?

    I was getting this same mysterious error immediately after a successful CRS install. The command I was running was "runcluvfy.sh stage -pre dbinst -n node1" from my stage directory. All previous executions worked correctly. My thought now is that the CRS install added JRE files that interrupt its paths; still not sure though.
    The MetaLink article referenced above was very helpful in the end but only after I paid special attention to the wording at the very bottom. Originally, I dismissed the article because it is primarily for Solaris; while we are using Red Hat AS 4 update 5. However, in the very last sentence it states the following:
    "Note: If you encounter same error for cluvfy stage -pre crsinst -n node on Linux x86-64 platform before CRS installation, please check if jdk 1.5 is installed. cluvfy works with jdk 1.4. Please install jdk 1.4 or install CRS and run cluvfy from CRS installation."
    Ah, we had also installed JSE 6. And sure enough, executing 'java -version' showed that it was picking up the path to JSE 6, not 1.4.
    So, following the note in the above article, I did two things. First, I changed the CV_HOME variable to $ORACLE_HOME/bin and ran 'which cluvfy' to verify the correct location. Second, I executed "cluvfy.sh stage -pre dbinst -n node1". It worked!

  • Smpatch analyze: Failure: Cannot connect to retrieve Database/current.zip:

    I get the following error when I run smpatch analyze:
    Failure: Cannot connect to retrieve Database/current.zip: Connection reset
    I was successfully able to register the system, but I keep getting this error with smpatch analyze. I snooped the traffic and saw packets to and from getupdates1.sun.com on port 443, so it is not a firewall problem.
    Did I miss a step in the installation?
    Thanks

    Looks like the root cause (to me at least) is confusion between the contracted support server and the public updates server. They appear not to be the same...
    Since late September all my Sol 8 "smpatch" clients were broken, giving me "Response code was 500" because the LPS / Update Connection Proxy was set to https://getupdates1.sun.com/ which is apparently the public site. ( https://getupdates.sun.com/ didn't work either.)
    My Solaris 10 clients were working fine through the LPS, however - or so I thought.
    When I just noticed that the patchpro.patch.source on Solaris 8 with 124270-01 is showing https://updateserver.sun.com/solaris/ as the default, I tried that as the LPS patch source URL, and now everything works again - for the first time in weeks.
    Disclaimer: We are a support contract customer, so I don't know if this info applies to everyone. My LPS has been registered (has "entitlement") since I set it up in February, so it was a complete surprise that Solaris 8 smpatch clients started failing even after I put 124270-01 on the client and 121118-08 on the Sol 10 LPS.
    Hope this helps...
    -- Stefan

  • Cannot connect to retrieve Database/current.zip

    I just completed a patch upgrade on the Solaris 10 machine that runs our local patch
    server. It needed several hundred patches, as it hadn't been patched since September.
    After that, the behavior of our Solaris 10 patch clients has changed. For example, on one
    client a few days earlier `smpatch analyze' had reported 218 patches. Now it reports only
    these three:
    119255-27 SunOS 5.10_x86: Install and Patch Utilities Patch
    121119-08 SunOS 5.10_x86: Sun Update Connection System Client 1.0.8
    119789-07 Synopsis: SunOS 5.10_x86: Sun Update Connection Proxy 1.0.9
    Our clients currently use `http://xxxxxx.yyyy.tld:3816/solaris/' as their patch source.
    Dropping the `solaris/' results in this error, complete with HTML tags, from
    `smpatch analyze':
    Failure: Cannot connect to retrieve Database/current.zip: <h1>/ Not Found</h1><p>The resource identified by / could not be found.</p>
    With the original patch source, `smpatch update' does apply the three patches. The
    next `smpatch analyze' reports that no patches are required, which is certainly
    incorrect! By dropping the `solaris/' again, `smpatch analyze' reports 221 patches.
    Suddenly, patching has become a very complicated process. We have to patch,
    change the patch source, and patch again. There's no longer a way to use the installed
    `smpatch' to obtain a list of required patches or to download them all. What can be
    done to solve this problem?

    You should only need to change the patch source this one time for each Solaris 10 client (Solaris 9 clients still use the "solaris/" source), in future you should not have a problem.
    By patching the local patch server, the Sun Update Connection Proxy software has been upgraded to a newer version which now serves patches for Solaris 10 clients from the location 'http://xxxxxx.yyyy.tld:3816/'. This source is compatible with clients at a patch level of 121119-08 or greater, hence the 3 patches still available at 'http://xxxxxx.yyyy.tld:3816/solaris/' allow that level to be reached for clients on older revisions.

  • Failure: Cannot connect to retrieve Database/current.zip: Internal Server E

    Hello
    I have installed an update connection proxy and I have registered it
    smpatch get
    patchpro.backout.directory - ""
    patchpro.baseline.directory - /var/sadm/spool
    patchpro.download.directory - /var/sadm/spool
    patchpro.install.types - rebootafter:reconfigafter:standard
    patchpro.patch.source - https://getupdates1.sun.com/
    patchpro.patchset - current
    patchpro.proxy.host - ""
    patchpro.proxy.passwd **** ****
    patchpro.proxy.port - 8080
    patchpro.proxy.user - ""
    patchsvr setup -l
    Patch source URL: file:/patchsvr
    Cache location: /export/home/proxycashe
    I also did smpatch analyze > file
    and I downloaded the patches (*.jar) and copied them to
    /patchsvr/Patches
    But I don't really know where to get the files current.zip and detectors.jar.
    I searched for them on the server and I found :
    /var/sadm/spool/cache/Database/https%3A%2F%2Fgetupdates1.sun.com%2F%2FDatabase%2Fcurrent.zip
    /var/sadm/spool/cache/https%3A%2F%2Fgetupdates1.sun.com%2F%2Fdetectors.jar
    and I copied them into /patchsvr/Database and /patchsvr/Misc.
    On an update connection client I did:
    #smpatch set patchpro.patch.source=http://192.168.0.71:3816/solaris/
    #smpatch analyze
    Failure: Cannot connect to retrieve Database/current.zip: Internal Server Error
    Can you please help me?
    Thanks.
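    For reference, the odd-looking cache file names in /var/sadm/spool/cache are just the source URL, percent-encoded so it is safe to use as a file name. A quick standard-library sketch, using the URL from the output above:

    ```python
    # The cache file name is the percent-encoded download URL; the mapping
    # is reversible, so you can recover the source URL from the file name.
    from urllib.parse import quote, unquote

    url = "https://getupdates1.sun.com//Database/current.zip"
    encoded = quote(url, safe="")   # encode every reserved character
    print(encoded)
    # https%3A%2F%2Fgetupdates1.sun.com%2F%2FDatabase%2Fcurrent.zip
    print(unquote(encoded) == url)  # True - decoding recovers the URL
    ```
    
    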

    On the proxy I cannot execute:
    # rm /var/sadm/spool/patchsvr/*current.zip
    # rm /var/sadm/spool/patchsvr/*detectors.jar
    because /var/sadm/spool/patchsvr/ is empty.
    # patchsvr setup -l
    Patch source URL: file:/patchsvr
    Cache location: /export/home/proxycashe
    On the client:
    I removed the current.zip and detectors.jar
    # smpatch get
    patchpro.backout.directory - ""
    patchpro.baseline.directory - /var/sadm/spool
    patchpro.download.directory - /var/sadm/spool
    patchpro.install.types - rebootafter:reconfigafter:standard
    patchpro.patch.source http://192.168.0.71:3816/ https://getupdates1.sun.com/
    patchpro.patchset - current
    patchpro.proxy.host - ""
    patchpro.proxy.passwd **** ****
    patchpro.proxy.port - 8080
    patchpro.proxy.user - ""
    # smpatch analyze -C patchpro.log.level=3 -C patchpro.debug=true
    Effective proxy host : ""
    Effective proxy port : "8080"
    Effective proxy user : ""
    ... Submitting download request against a GUUS server
    ... ... Hostname of URL is 192.168.0.71
    ... ... Filename of URL is /xml/motd.xml
    ... ... File path portion of URL is /xml/motd.xml
    Defining request header : IF_MODIFIED_SINCE... valueThu Jan 01 02:00:00 EET 1970
    ... Caught IO Exception.
    ((HttpURLConnection)connection).getResponseCode() : 404
    ((HttpURLConnection)connection).getResponseMessage() : motd.xml not found
    Error: Unable to download document : "xml/motd.xml"
    Cannot connect to retrieve motd.xml: motd.xml not found
    Unable to display any message of the day notices from Sun Microsystems. Refer to the log file for additional information.
    Effective proxy host : ""
    Effective proxy port : "8080"
    Effective proxy user : ""
    ... Submitting download request against a GUUS server
    ... ... Hostname of URL is 192.168.0.71
    ... ... Filename of URL is /detector/detectors.jar
    ... ... File path portion of URL is /detector/detectors.jar
    Effective proxy host : ""
    Effective proxy port : "8080"
    Effective proxy user : ""
    Last Modified Date read: Thu Jan 01 02:00:00 EET 1970
    ... Submitting download request against a GUUS server
    ... ... Hostname of URL is 192.168.0.71
    ... ... Filename of URL is /database/current.zip
    ... ... File path portion of URL is /database/current.zip
    Defining request header : IF_MODIFIED_SINCE... valueThu Jan 01 02:00:00 EET 1970
    Key 1 : Content-Type = application/java-archive
    Key 2 : Content-Length = 3750071
    Key 3 : content-disposition = attachment; filename=detectors.jar
    Key 4 : Date = Thu, 26 Jul 2007 05:26:25 GMT
    Key 5 : Server = Apache Tomcat/4.0.5 (HTTP/1.1 Connector)
    Key 6 : Content-length = 3750071
    Defining request header : IF_MODIFIED_SINCE... valueThu Jan 01 02:00:00 EET 1970
    Key 1 : Content-Type = application/zip
    Key 2 : Content-Length = 323247
    Key 3 : content-disposition = attachment; filename=current.zip
    Key 4 : Date = Thu, 26 Jul 2007 05:26:26 GMT
    Key 5 : Server = Apache Tomcat/4.0.5 (HTTP/1.1 Connector)
    Key 6 : Content-length = 323247
    Last Modified Date value written to /var/sadm/spool/cache/Database/http%3A%2F%2F192.168.0.71%3A3816%2F%2FDatabase%2Fcurrent.zip.lmd is 0
    Date format for this value: Thu Jan 01 02:00:00 EET 1970
    Last Modified Date file updated : /var/sadm/spool/cache/Database/http%3A%2F%2F192.168.0.71%3A3816%2F%2FDatabase%2Fcurrent.zip.lmd
    Last Modified Date value written to /var/sadm/spool/cache/http%3A%2F%2F192.168.0.71%3A3816%2F%2Fdetectors.jar.lmd is 0
    Date format for this value: Thu Jan 01 02:00:00 EET 1970
    Last Modified Date file updated : /var/sadm/spool/cache/http%3A%2F%2F192.168.0.71%3A3816%2F%2Fdetectors.jar.lmd
    Effective proxy host : ""
    Effective proxy port : "8080"
    Effective proxy user : ""
    Effective proxy host : ""
    Effective proxy port : "8080"
    Effective proxy user : ""
    ... Submitting download request against a GUUS server
    ... ... Hostname of URL is 192.168.0.71
    ... ... Filename of URL is /entitlement/
    ... ... File path portion of URL is /entitlement/
    Defining request header : IF_MODIFIED_SINCE... valueThu Jan 01 02:00:00 EET 1970
    ... Caught IO Exception.
    ((HttpURLConnection)connection).getResponseCode() : 404
    ((HttpURLConnection)connection).getResponseMessage() : default not found
    Error: Unable to download entitlement information using the update server proxy.
    Cannot connect to retrieve : default not found
    # ls -laR /var/sadm/spool/cache
    /var/sadm/spool/cache:
    total 7358
    drwxr-xr-x 6 root sys 512 Jul 26 08:25 .
    drwxr-xr-x 3 root sys 512 Jul 19 00:25 ..
    drwxr-xr-x 2 root root 512 Jul 26 08:25 Database
    drwxr-xr-x 2 root root 512 Jul 26 08:26 entitlement
    -rw-r--r-- 1 root root 3750071 Jul 26 08:25 http%3A%2F%2F192.168.0.71%3A3816%2F%2Fdetectors.jar
    -rw-r--r-- 1 root root 14 Jul 26 08:25 http%3A%2F%2F192.168.0.71%3A3816%2F%2Fdetectors.jar.lmd
    drwxr-xr-x 3 root sys 512 Jul 19 00:35 updatemanager
    drwxr-xr-x 2 root root 512 Jul 26 08:26 xml
    /var/sadm/spool/cache/Database:
    total 662
    drwxr-xr-x 2 root root 512 Jul 26 08:25 .
    drwxr-xr-x 6 root sys 512 Jul 26 08:25 ..
    -rw-r--r-- 1 root root 323247 Jul 26 08:25 http%3A%2F%2F192.168.0.71%3A3816%2F%2FDatabase%2Fcurrent.zip
    -rw-r--r-- 1 root root 14 Jul 26 08:25 http%3A%2F%2F192.168.0.71%3A3816%2F%2FDatabase%2Fcurrent.zip.lmd
    /var/sadm/spool/cache/entitlement:
    total 4
    drwxr-xr-x 2 root root 512 Jul 26 08:26 .
    drwxr-xr-x 6 root sys 512 Jul 26 08:25 ..
    /var/sadm/spool/cache/updatemanager:
    total 6
    drwxr-xr-x 3 root sys 512 Jul 19 00:35 .
    drwxr-xr-x 6 root sys 512 Jul 26 08:25 ..
    drwxr-xr-x 2 root sys 512 Jul 19 00:35 analysis.results
    /var/sadm/spool/cache/updatemanager/analysis.results:
    total 4
    drwxr-xr-x 2 root sys 512 Jul 19 00:35 .
    drwxr-xr-x 3 root sys 512 Jul 19 00:35 ..
    /var/sadm/spool/cache/xml:
    total 4
    drwxr-xr-x 2 root root 512 Jul 26 08:26 .
    drwxr-xr-x 6 root sys 512 Jul 26 08:25 ..

  • Retrieve and update dynamic data

    Dear
    Please advise whether ECM provides the ability to retrieve and update dynamic data on demand, to ensure the user’s content is current.
    Thanks
    M.E.
    Edited by: user803214 on Jan 31, 2011 5:34 AM

    The question is quite vague; it is very hard to answer a question when you do not know what is being asked.
    As to automatic indexing and updates of data to files and the impact of indexing:
    A file checked into the UCM server is indexed according to the system setting: a server can be set to DATABASE.METADATA indexing or to some form of full-text indexing. Either way, when a file is checked into the Content Server, it gets indexed. If any metadata is changed by a later update, the index is updated automatically. Likewise, in a full-text-indexed system, if some of the file's text is edited or a totally different file is checked in, the index is also updated automatically.

  • Can send but not retrieve emails using Thunderbird and Windows 8.1

    Hi, I am not able to get Thunderbird to retrieve emails on a new laptop.
    Last week I purchased a new laptop due to an imminent hard disk failure on my old laptop. I will call them "Old" and "New" laptops. (Old laptop is still working for the moment...)
    The Old laptop is nearly 3 years old, and I have been using Thunderbird on it since new without any problems.
    I have downloaded and installed Thunderbird on the New laptop, and cannot get it to retrieve emails.
    As the Old laptop is still working (at the moment), I am able to directly visually compare and check settings on both computers.
    THE PROBLEM
    Using Thunderbird on the New laptop, I am able to send emails, and I can see all my old emails restored from the Old laptop using Mozbackup, but clicking Get Mail on the New laptop does not retrieve new emails.
    When I click Get Mail, the status message in the bottom left corner says "connected to .....<ISP>..." but does not actually get the emails. However, the Old laptop is still able to get emails as it always has.
    My ISP is a large Australian telco that I have used for many years and I have not changed anything with my account. The ISP is POP3.
    WHAT HAVE I TRIED
    I deleted the email account created by restoring from Mozbackup, and attempted using the Thunderbird new account wizard, however Thunderbird did not recognise my ISP and would not create the account - even though it is working perfectly at the same time on the Old laptop.
    All Firewalls I can find have been configured to allow Thunderbird, with the same configuration settings on both machines.
    How do I know I can send but not get emails from Thunderbird on the New laptop?
    I have sent test emails to myself from the New laptop. Thunderbird sends successfully, I am able to preview the test emails via Mailwasher on the New laptop, and can read, reply, etc, to the emails if I log into my ISP webmail service. However I can NOT get Thunderbird to retrieve and download these emails on the New laptop. If I go to the Old laptop, Thunderbird works perfectly, retrieves and downloads all new emails, including these test emails.
    I am now at the point where I do not know what else to try and would appreciate any suggestions.....
    I have spent many hours over the last 3 days surfing the net, forums, Mozilla Support, etc, and tried everything suggested, without success. The only thing I have NOT tried is starting windows in Safe mode (as per some suggestions), as it is not a viable long term solution.
    OTHER INFO
    Computer Details;-
    "Old" laptop;-
    Windows 7 Home Premium - continuously updated with all Critical and Important updates. 64 bit version.
    Antivirus = Bitdefender Total Security (subscription and licensed). Up to date.
    MailwasherPro 2012 (subscription and licensed). Up to date.
    Thunderbird 24.5.0 working perfectly. Up to date.
    "New" laptop;-
    Windows 8.1 - up to date with all Critical and Important updates. 64 bit version.
    Antivirus = Bitdefender Total Security (subscription and licensed) - downloaded and installed. Up to date.
    MailwasherPro 2012 (subscription and licensed) - downloaded and installed. Up to date.
    Thunderbird 24.5.0 - downloaded and installed.
    Note: I 'downloaded and installed' the Windows 8.1 version of these programs to avoid any potential compatibility problems restoring or copying from a Windows 7 backup version of the programs.
    1) The New laptop came pre-installed with McAfee, which has been uninstalled because I run Bitdefender (Windows searches for 'McAfee' return nothing found, suggesting the uninstall was successful).
    2) Thunderbird has been added to both Bitdefender and Windows 8.1 firewalls on the New laptop.
    3) Bitdefender and Windows security settings are identical on both laptops - as near as I can determine, I may be missing something - but have checked all settings 3 times by doing a side-by-side visual comparison.
    4) Thunderbird settings are also identical and I am using the same email address and ISP that I have been using for many years.
    5) I migrated Thunderbird settings, accounts and emails from the Old laptop to the New laptop using Mozbackup, then visually checked and compared all settings in Thunderbird between the two machines.
    6) For info, I have used Mailwasher for many years as a simple way to preview and wash mail before running email programs such as Thunderbird - might be overkill, but I have also never had a virus or malware problem, so will keep doing this.
    7) I have Uninstalled and re-installed Thunderbird on the New laptop twice, and done the same with Bitdefender once.
    8) Any config changes I try on the New laptop, I either restart Thunderbird and / or restart the New laptop, or both.
    9) I have not created a Windows 8.1 account and am not using SkyDrive.
    10) Only other software I have installed on the New laptop is Google Chrome, Office Home 2013 and iTunes - all are working.
    Thanks in advance....

    Well there is not much left. If telnet works then the path is clear and there is no hardware firewall blocking ports.
    All that is left is software, so try Windows Safe Mode with Networking. It is a very handy way to eliminate software; it also actually disables most antivirus resident features, and is preferred here because antivirus/security programs do not always disable or turn off when they say they have.
    You might also try the McAfee removal tool. https://community.mcafee.com/thread/52788 I notice the rep said yes to both, they did not say not required.
    Finally, you could try creating a new profile and see if it works, just to exclude corruption of your existing profile. (I doubt that is it, but I am at the anything-goes point.)
    Just as an aside, I have never had an email-borne nasty, and for many years I never even had an email virus scanner. So yes, I think you're a bit overcautious with Mailwasher, if that is what you're using it for. Thunderbird does not support scripting languages, Flash, or most plugins in the email sandbox. The result is that getting a virus from an email in Thunderbird is, for all practical purposes, only possible if you open that password-protected zip from the young Russian woman, and that is what the antivirus should be blocking anyway when the temp file is written.
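    For reference, the telnet connectivity check mentioned above can be run from a command prompt on the New laptop (the server name below is a placeholder for your ISP's actual POP3 host):

    ```shell
    # Placeholder host; substitute your ISP's real POP3 server name.
    # Port 110 is plain POP3; use 995 for POP3 over SSL.
    telnet mail.example.com 110
    # A reachable server greets you with a line like: +OK POP3 server ready
    ```

    Note that on Windows 8.1 the telnet client is not enabled by default; it can be turned on under "Turn Windows features on or off".
    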

  • Data Services job rolling back Inserts but not Deletes or Updates

    I have a fairly simple CDC job that I'm trying to put together. My source table has a record type code of "I" for Inserts, "D" for deletes, "UB" for Update Before and "UP" for Update After. I use a Map_CDC_Operation transform to update the destination table based on those codes.
    I am not using the Transaction Control feature (because it just throws an error when I use it)
    My issue is as follows.
    Let's say I have a set of 10,000 Insert records in my source table. Record number 4000 happens to be a duplicate of record number 1. The job will process the records in order starting with record 1 and begin happily inserting records into the destination table. Once it gets to record 4000 however it runs into a duplicate key issue and then my try/catch block catches the error and the dataflow will exit. All records that were inserted prior to the error will be rolled back in the destination.
    But the same is not true for updates or deletes. If I have 10000 deletes and 1 insert in the middle that happens to be an insert of a duplicate key, any deletes processed before the insert will not be rolled back. This is also the case for updates.
    And again, I am not using Transaction Control, so I'm not sure why the Inserts are being rolled back and, more curiously, why the Updates and Deletes are not. I'm not sure why the result isn't consistent regardless of the type of operation. Does anyone know what's going on here, or what I'm doing wrong / what my misconception may be?
    Environment information: both source and destination are SQL Server 2008 databases and the Data Services version we use is 14.1.1.460.
    If you require more information, please let me know.
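    One hedged guess at the asymmetry described above: it is what you would see if the inserts accumulated in a single open transaction while deletes and updates were committed in smaller batches along the way. A small standalone illustration of that transaction behavior (plain Python + SQLite, not Data Services itself; the table and values are made up):

    ```python
    # Illustration only: when all inserts sit in one uncommitted transaction,
    # a duplicate-key error rolls the whole batch back; operations committed
    # earlier survive the error.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE dest (id INTEGER PRIMARY KEY, val TEXT)")
    conn.commit()

    # Case 1: one uncommitted batch of inserts, duplicate key in the middle.
    rows = [(1, "a"), (2, "b"), (1, "dup"), (3, "c")]   # id 1 appears twice
    try:
        for r in rows:
            conn.execute("INSERT INTO dest VALUES (?, ?)", r)
        conn.commit()
    except sqlite3.IntegrityError:
        conn.rollback()   # everything since the last commit is undone
    count_after_insert_failure = conn.execute(
        "SELECT COUNT(*) FROM dest").fetchone()[0]      # 0: all rolled back

    # Case 2: a delete committed BEFORE the failing insert is not rolled back.
    conn.executemany("INSERT INTO dest VALUES (?, ?)",
                     [(10 + i, "x") for i in range(5)]) # seed ids 10..14
    conn.commit()
    try:
        conn.execute("DELETE FROM dest WHERE id = 10")
        conn.commit()                                   # delete committed here
        conn.execute("INSERT INTO dest VALUES (11, 'dup')")  # duplicate key
        conn.commit()
    except sqlite3.IntegrityError:
        conn.rollback()   # only the failed insert is undone; the delete sticks
    remaining = conn.execute("SELECT COUNT(*) FROM dest").fetchone()[0]  # 4
    ```

    If the job's commit size differs per opcode (or the inserts are held until the end of the dataflow), this would produce exactly the reported behavior; checking the per-opcode commit sizes, as suggested in the reply below, is the way to confirm.
    
    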

    Hi Michael,
    Thanks for your reply. Here are all the options on my source table:
    My Rows per commit on the table is 10,000.
    Delete data table before loading is not checked.
    Column comparison - Compare by name
    Number of loaders - 1
    Use overflow file - No
    Use input keys - Yes
    Update key columns - No
    Auto correct load - No
    Include in transaction - No
    The rest were set to Not Applicable.
    How can I see the size of the commits for each opcode? If they are in fact different from my Rows per commit (10,000) that may solve my issue.
    I'm new to Data Services so I'm not sure how I would implement my own transaction control logic using a control column and script. Is there a guide somewhere I can follow?
    I can also try using the Auto correct load feature. I'm guessing "upsert" was a typo for insert? Where is that option?
    Thank you very much!
    Riley
