HttpSession across application in the cluster
I have two questions:
Can I deploy two web applications in the cluster, one clustered and the other not clustered?
Assuming I can do this, can I keep an HttpSession across these two applications?
Christian Corcino
Sharing a session across webapps is prohibited by the servlet spec. You could store the shared state in a database and have the other web app access it there.
Christian Corcino wrote:
> I have two questions:
> Can I deploy two web applications in the cluster, one clustered and the other not clustered?
> Assuming I can do this, can I keep an HttpSession across these two applications?
>
> Christian Corcino
Rajesh Mirchandani
Developer Relations Engineer
BEA Support
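One way to apply the database suggestion above: both webapps key the shared state by a token visible to both (for example a common cookie value) and read/write it through a shared table. The plain-Java sketch below uses an in-memory map as a stand-in for that table; every class, method, and token name is illustrative, not from any product API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for a table like SHARED_SESSION(token, attr_key, attr_value)
// that both webapps can reach. In a real deployment these methods would
// issue JDBC INSERT/SELECT statements instead of touching a local map.
public class SharedSessionStore {
    private final Map<String, Map<String, String>> table = new ConcurrentHashMap<>();

    // Webapp A writes an attribute under the shared token.
    public void put(String token, String key, String value) {
        table.computeIfAbsent(token, t -> new ConcurrentHashMap<>()).put(key, value);
    }

    // Webapp B later reads it back using the same token.
    public String get(String token, String key) {
        Map<String, String> attrs = table.get(token);
        return attrs == null ? null : attrs.get(key);
    }
}
```

Each webapp keeps its own HttpSession (as the spec requires) and only the explicitly shared attributes go through the store.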
Similar Messages
-
Issue with LCM while migrating planning application in the cluster Env.
Hi,
Having issues with LCM while migrating the Planning application in the cluster environment. In LCM we get the error below, and the application is up and running. Please let me know if anyone else has faced the same issue in a cluster environment. We have done the migration using LCM on a single server and it works fine; it is just the cluster environment that is an issue.
Error on Shared Service screen:
Post execution failed for - WebPlugin.importArtifacts.doImport. Unable to connect to "ApplicationName", ensure that the application is up and running.
Error on network:
“java.net.SocketTimeoutException: Read timed out”
ERROR - Zip error. The exception is -
java.net.SocketException: Connection reset by peer: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
Hi,
First of all, if your source and target environments are the same, then you will already have all the users and groups in Shared Services; in that case you just have to provision the users for this new application so that your security gets migrated when you migrate from the source application. If the environments are different, then you have to migrate the users and groups first and provision them before importing the security using LCM.
Coming back to the process of importing the artifacts into the target application using LCM: you have to place the migrated file in the @admin native directory in Oracle/Middleware/epmsystem1.
Open the Shared Services Console -> File System and you will see your file name under that.
Select the file and you will see all your exported artifacts. Select all if you want to do complete migration to target.
Follow the steps, select the target application to which you want to migrate and execute migration.
Open the application and you will see all your artifacts migrated to the target.
If you face any error during migration, it will be shown in the migration report.
Thanks,
Sourabh -
Access local SSB across applications in the same instance
Is it possible for application X to call a local stateless session bean method in application Y and application Z where X, Y, and Z are deployed under the same instance?
Our scenario requires application X to call local methods in any number of other applications installed under the same instance. The methods can't be remote because the parameters we're passing cannot be serialized. So far, nothing has worked. Any help would be greatly appreciated.
"Madhu Jatheendran" <[email protected]> wrote in message news:[email protected]..
>
Hi, I have the following points needing clarification.
1) I have 2 Enterprise Archive files holding 2 separate applications on a WebLogic instance (6.1 SP4). Is there any document that describes how I would perform SSO across these 2 applications? Would I have to use the exploded format, or would just deployment as EARs do? Is there any documentation which describes the same? Would I need to
a) recreate the session information, or
b) store session information in a temporary storage location and then retrieve it when hopping from one application to the other?
2) How would the same situation be handled when I have a clustered environment, wherein I need to ensure that I have SSO not only across 2 EARs in the same instance but also across different instances on separate machines?
You may want to ask about session stuff in the weblogic.developer.interest.servlet newsgroup.
Will the Application Scope be shared across the cluster in a multi-node OC4J environment?
Hi,
I have the following requirement:
Users of the application may only have a single (browser) session. When a user who already has a session connects again, he should no longer be allowed to access the older session.
My proposed implementation is:
- After successful login – possibly using a Session Listener - an entry is made in a HashMap UserSessions that lives in the application scope. Key is the username, value is the session id (HttpSession.getId()).
- For every request, using a servlet filter, we check whether the current session id is still in the UserSessions HashMap for the current user. If a new session has been created for the same user, the map holds that new session's id, so the filter will not find the old session's id. In that case, the filter should invalidate the session and forward the user to an error page.
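The bookkeeping in the two steps above can be sketched in plain Java, with the servlet wiring (session listener and filter) reduced to direct method calls; the class and method names are illustrative, not from any framework:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Tracks the single valid session id per user. A login registers the new
// session id, displacing any older one; the per-request check (done in a
// servlet filter in the real application) rejects displaced sessions.
public class UserSessions {
    private final Map<String, String> current = new ConcurrentHashMap<>();

    // Called after a successful login (e.g. from an HttpSessionListener):
    // the newest session id wins.
    public void register(String username, String sessionId) {
        current.put(username, sessionId);
    }

    // Called by the filter on every request. Returns false when a newer
    // session has replaced this one, in which case the filter should
    // invalidate the session and forward to an error page.
    public boolean isCurrent(String username, String sessionId) {
        return sessionId.equals(current.get(username));
    }
}
```

Note that this only works cluster-wide if the map itself is shared across nodes, which is exactly the question that follows.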
However, the application will run on a multi-node OC4J cluster. I am starting to wonder:
Will the Application Scope be shared across the cluster in a multi-node OC4J environment?
I know session state can be shared. But what application state/scope?
Does anyone know? Do I have to do anything special in the cluster or the application to get this to work?
Thanks for your help.
Lucas
G'day Lucas --
Application scope is not replicated across JVM boundaries with OC4J.
I'm sure this used to be described in the doc, but I can't find it now from a quick scan.
If you wanted to use this type of pattern, you could look to use a Coherence cache as distribution mechanism to share objects across multiple JVMs/nodes.
-steve- -
Common memory place across the cluster nodes
Hi All,
I am a WebSphere Application Server v6.1 user. I am running an application that uses a HashMap to store common information in the form of key-value pairs. The application works fine in a single-server environment, but the same application fails in a cluster environment. This happens because the HashMap information is not available to the cluster nodes, which run in different JVMs.
Could anybody suggest a good design whereby I can use a common place to store the HashMap information, like a queue, a database, or any common memory area which is available across the cluster nodes? I am not really familiar with the memory facilities offered by WebSphere. (A central database is the option I least prefer, as the application makes several calls to the database, resulting in deadlocks and 100% CPU utilization.)
Also, the values are added to the HashMap dynamically, so the common store should allow me to add values dynamically at runtime.
Please suggest whether there is any other way, or any links to refer to, to achieve the above.
Thanks in advance
-Sandeep
Message was edited by:
km-sandeep
For a similar scenario we maintain a version flag in the DB, based on which we reload the HashMap. I'm also interested in finding out a design without a DB.
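The version-flag workaround mentioned above can be sketched in plain Java; the version source and loader are hypothetical stand-ins for the real database calls (e.g. a SELECT of a counter column and a full table read):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Caches a shared map locally on each node and reloads it only when a
// version counter kept in the database has changed since the last load,
// so most accesses cost one cheap version read instead of a full reload.
public class VersionedMapCache {
    private final Supplier<Long> versionSource;          // e.g. SELECT version FROM config_version
    private final Supplier<Map<String, String>> loader;  // e.g. full reload from the shared table
    private volatile long loadedVersion = -1;
    private volatile Map<String, String> cache = new ConcurrentHashMap<>();

    public VersionedMapCache(Supplier<Long> versionSource,
                             Supplier<Map<String, String>> loader) {
        this.versionSource = versionSource;
        this.loader = loader;
    }

    public Map<String, String> get() {
        long current = versionSource.get();
        if (current != loadedVersion) {
            synchronized (this) {
                if (current != loadedVersion) {
                    cache = new ConcurrentHashMap<>(loader.get());
                    loadedVersion = current;
                }
            }
        }
        return cache;
    }
}
```

A writer bumps the version in the same transaction that changes the table, and every node picks up the change on its next access.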
-
Error reading Web application occurs when starting one server in the cluster
Hi All,
I have configured two cluster servers on Win 2000. The admin server is also Win 2000. One cluster server joins the cluster while the other gives the following error and then starts. But it seems that it does not join the cluster view, because only the previous server serves the requests. I have attached the log as well.
<> <101062> <[HTTP synergyserver] Error reading Web application "D:\bea\wlserver6.0\.\config\bd2kadmindomain\applications\.wl_temp_do_not_delete_synergyserver\wl_local_comp12863.war">
java.net.UnknownHostException: java.sun.com
at java.net.InetAddress.getAllByName0(InetAddress.java:571)
at java.net.InetAddress.getAllByName0(InetAddress.java:540)
at java.net.InetAddress.getAllByName(InetAddress.java:533)
at weblogic.net.http.HttpClient.openServer(HttpClient.java:159)
at weblogic.net.http.HttpClient.openServer(HttpClient.java:221)
at weblogic.net.http.HttpClient.<init>(HttpClient.java:85)
at weblogic.net.http.HttpURLConnection.getHttpClient(HttpURLConnection.java:109)
at weblogic.net.http.HttpURLConnection.getInputStream(HttpURLConnection.java:301)
at java.net.URL.openStream(URL.java:798)
at weblogic.apache.xerces.readers.DefaultReaderFactory.createReader(DefaultReaderFactory.java:149)
at weblogic.apache.xerces.readers.DefaultEntityHandler.startReadingFromExternalEntity(DefaultEntityHandler.java:775)
at weblogic.apache.xerces.readers.DefaultEntityHandler.startReadingFromExternalSubset(DefaultEntityHandler.java:570)
at weblogic.apache.xerces.framework.XMLDTDScanner.scanDoctypeDecl(XMLDTDScanner.java:1131)
at weblogic.apache.xerces.framework.XMLDocumentScanner.scanDoctypeDecl(XMLDocumentScanner.java:2177)
at weblogic.apache.xerces.framework.XMLDocumentScanner.access$0(XMLDocumentScanner.java:2133)
at weblogic.apache.xerces.framework.XMLDocumentScanner$PrologDispatcher.dispatch(XMLDocumentScanner.java:882)
at weblogic.apache.xerces.framework.XMLDocumentScanner.parseSome(XMLDocumentScanner.java:380)
at weblogic.apache.xerces.framework.XMLParser.parse(XMLParser.java:900)
at weblogic.apache.xerces.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:123)
at weblogic.servlet.internal.dd.DescriptorLoader.<init>(DescriptorLoader.java:178)
at weblogic.servlet.internal.HttpServer.loadWARContext(HttpServer.java:446)
at weblogic.servlet.internal.HttpServer.loadWebApp(HttpServer.java:404)
at weblogic.j2ee.WebAppComponent.deploy(WebAppComponent.java:74)
at weblogic.j2ee.Application.addComponent(Application.java:133)
at weblogic.j2ee.J2EEService.addDeployment(J2EEService.java:115)
at weblogic.management.mbeans.custom.DeploymentTarget.addDeployment(DeploymentTarget.java:327)
at weblogic.management.mbeans.custom.DeploymentTarget.addDeployment(DeploymentTarget.java:143)
at weblogic.management.mbeans.custom.WebServer.addWebDeployment(WebServer.java:76)
at java.lang.reflect.Method.invoke(Native Method)
at weblogic.management.internal.DynamicMBeanImpl.invokeLocally(DynamicMBeanImpl.java:562)
at weblogic.management.internal.DynamicMBeanImpl.invoke(DynamicMBeanImpl.java:548)
at weblogic.management.internal.ConfigurationMBeanImpl.invoke(ConfigurationMBeanImpl.java:285)
at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1555)
at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1523)
at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:439)
at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:180)
at $Proxy40.addWebDeployment(Unknown Source)
at weblogic.management.configuration.WebServerMBean_CachingStub.addWebDeployment(WebServerMBean_CachingStub.java:1012)
at weblogic.management.mbeans.custom.DeploymentTarget.addDeployment(DeploymentTarget.java:313)
at weblogic.management.mbeans.custom.DeploymentTarget.addDeployments(DeploymentTarget.java:277)
at weblogic.management.mbeans.custom.DeploymentTarget.updateServerDeployments(DeploymentTarget.java:232)
at weblogic.management.mbeans.custom.DeploymentTarget.updateDeployments(DeploymentTarget.java:192)
at java.lang.reflect.Method.invoke(Native Method)
at weblogic.management.internal.DynamicMBeanImpl.invokeLocally(DynamicMBeanImpl.java:562)
at weblogic.management.internal.DynamicMBeanImpl.invoke(DynamicMBeanImpl.java:548)
at weblogic.management.internal.ConfigurationMBeanImpl.invoke(ConfigurationMBeanImpl.java:285)
at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1555)
at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1523)
at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:439)
at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:180)
at $Proxy0.updateDeployments(Unknown Source)
at weblogic.management.configuration.ServerMBean_CachingStub.updateDeployments(ServerMBean_CachingStub.java:2299)
at weblogic.management.mbeans.custom.ApplicationManager.startConfigManager(ApplicationManager.java:240)
at weblogic.management.mbeans.custom.ApplicationManager.start(ApplicationManager.java:122)
at java.lang.reflect.Method.invoke(Native Method)
at weblogic.management.internal.DynamicMBeanImpl.invokeLocally(DynamicMBeanImpl.java:562)
at weblogic.management.internal.DynamicMBeanImpl.invoke(DynamicMBeanImpl.java:548)
at weblogic.management.internal.ConfigurationMBeanImpl.invoke(ConfigurationMBeanImpl.java:285)
at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1555)
at com.sun.management.jmx.MBeanServerImpl.invoke(MBeanServerImpl.java:1523)
at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:439)
at weblogic.management.internal.MBeanProxy.invoke(MBeanProxy.java:180)
at $Proxy9.start(Unknown Source)
at weblogic.management.configuration.ApplicationManagerMBean_CachingStub.start(ApplicationManagerMBean_CachingStub.java:435)
at weblogic.management.Admin.startApplicationManager(Admin.java:1033)
at weblogic.management.Admin.finish(Admin.java:493)
at weblogic.t3.srvr.T3Srvr.start(T3Srvr.java:429)
at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:170)
at weblogic.Server.main(Server.java:35)
Each cluster server's domain name is different, i.e. not "mydomain". The file it's complaining about is in the specified directory and it has proper privileges.
If anyone has an idea please respond.
Thanks
Nalika
[synergyserver.log]
You're probably getting that because the WL instance was not shut down properly.
If that's the case, you'll need to remove an LDAP lock file (with a .lok extension) in the directory of LDAP files (under the server dir).
Error While open the application in a Cluster on HFM Client
Hi,
I have added a cluster in the client configuration in HFM. After adding the cluster successfully, I logged into the Financial Management client and selected the cluster, and it opened a pop-up window in which we need to select the temp folder for that application.
My problem starts here: after giving the folder it throws me an error (sometimes we are able to select the application and assign the folder without any issues, but sometimes we cannot).
We tried with different credentials but no luck.
Here is the Error
"Could not Authenticate the specified user"
Error Reference Number: {5895D16C-F1B9-475B-A3A4-F577D1A28449}
Num: 0x8004021a;Type: 0;DTime: 3/6/2012 9:30:02 AM;Svr: HFM Client machine;File: CHsxClient.cpp;Line: 2343;Ver: 11.1.1.1.0.2120;ExErr: Unknown Error;
Can any body please help me!
Thanks
Ashok
Edited by: Ashok on Mar 6, 2012 6:40 AM
Hi Ashok,
Please check the below link which might resolve the issue.
Financial Management Error "There was some communication error" (Doc ID 1280613.1)
Hope this helps,
Thank you,
Charles Babu J -
The name 'rolename' is in use by the cluster already as network name or application name
I removed the Windows cluster iSCSI Target Server role a few days back since it was not needed, and now it is needed again, since it is clustered storage space and we need this role to provide iSCSI storage to an external server. Now when I add the role again I get this error:
The name winiscsi is in use by the cluster already as network name or application name
I double-checked that the role is not installed. I even rebooted both nodes.
ad
Hi Adnan-Vohra,
Could you post the original error information or a screenshot of this error? I can't find any similar error explained. Note that we cannot install the iSCSI target on any cluster node; we need a separate server as the shared storage.
Failover Clustering Hardware Requirements and Storage Options
https://technet.microsoft.com/en-us/library/jj612869.aspx
I’m glad to be of help to you!
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected] -
Synchronization of memberLeaving event across the cluster
Hi,
Let's assume that we have 2 nodes and one of them is leaving the cluster (but will continue to operate alone). Both nodes use a MemberListener to react to the memberLeaving(MemberEvent) event. Moreover, in our case as part of the leaving event logic we need to send a lot of messages from one node to the other node. That is why we need to have both nodes be part of the cluster until the leaving event is done in all nodes.
So my question is: will the node leaving the cluster wait to actually leave until all the other nodes (and itself) have finished processing the memberLeaving event?
If the answer is no, I was thinking that the way to ensure a proper clean-up would be to do all the clean-up logic in the node leaving the cluster, but that would imply bringing all the shared/cached data to the leaving node, and that wouldn't be a scalable operation.
Ideas are welcome. :)
Thanks,
-- Gato
Hi Gato,
The memberLeaving event is an asynchronous notification raised on a node B when another node A issues a programmatic Service.shutdown() or Cluster.shutdown() call. There is no way for code running on B to block or delay the shutdown process on the node A.
If you want to "negotiate" the conditions of a clean shutdown between cluster nodes, you would need to use a "satellite" InvocationService to communicate all necessary information and actions across the cluster before initiating the shutdown sequence.
Regards,
Gene -
Client application without joining the cluster
Hi All,
Is it possible for a client application to access the cache within a Coherence cluster? The client application is not part of the cluster and does not start with any cache config files or anything else.
The client application just uses :
NamedCache cache = CacheFactory.getCache("VirtualCache");
If a client application starts with a cache-config file it will also join the cluster; in that case, will the JVM of the client app also be loaded/distributed/replicated with the cache contents?
Please clarify my doubts.
Regards
Srinivas.
The only clean way of NOT joining the cluster is to connect via Extend.
You can join the cluster and specify the LocalStorage=false parameter; however, that is only applicable for a distributed cache. Replicated cache data still exists on every node. A bigger issue, in my mind, is that your node will be actively managing the membership of other members in the cluster, and that can become a problem.
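The Extend approach is typically configured on the client side with a cache configuration that maps cache names to a remote scheme. A minimal sketch follows; the scheme name, proxy host, and port are illustrative, and the exact elements vary by Coherence version:

```xml
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <!-- Route all cache names, including VirtualCache, through Extend -->
      <cache-name>*</cache-name>
      <scheme-name>extend-remote</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <remote-cache-scheme>
      <scheme-name>extend-remote</scheme-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <!-- Address of a cluster-side Extend proxy, not a storage node -->
            <socket-address>
              <address>proxy-host</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
        </tcp-initiator>
      </initiator-config>
    </remote-cache-scheme>
  </caching-schemes>
</cache-config>
```

With this in place the client still calls CacheFactory.getCache("VirtualCache") as above, but connects over TCP to the proxy instead of joining the cluster or holding any cache data.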
Timur -
How to know the Cluster name for Hyperion Planning Application
Hi,
We have 2 Hyperion Planning clusters in our Planning environment. As per a new requirement, we need to make a copy of the existing Planning application.
We are not able to get the defined cluster information for the existing Planning application.
Please help on how can we find the cluster name for a Hyperion Planning application.
Quick help will be appreciated!!
Thank You.
Mohit Jain
Hi John,
Thanks for your quick response!!
We tried to run this command on the Planning system command line, but it did not return any records. We found that the table "HSPSYS_APP_CLUSTER_DTL" does not have any records.
Is there any other way we can check the Planning cluster name?
Please help on this further.
Thank You.
Mohit Jain -
The cluster does not fail over when I shut down one managed server?
Hello, I created one cluster with two managed servers and deployed an application across the cluster, but WebLogic Server gave me two URLs with two different ports to access this application.
http://server1:7003/App_name
http://server1:7005/App_name
When I shut down (immediate) one managed server, I lose the connection to the application on that managed server. My question is: why do the failover and the load balancer not work?
Why two different addresses?
Thanks for any help.
Well, you have two different addresses (URLs) because those are two physical managed servers. By creating a cluster you do not automatically get a virtual address (URL) that load-balances requests for that application between those two managed servers.
If you want one URL to access this application, you will have to have some kind of web server in front of your WebLogic. You can install and configure Oracle HTTP Server to route requests to WebLogic cluster. Refer this:
http://download.oracle.com/docs/cd/E12839_01/web.1111/e10144/intro_ohs.htm#i1008837
And this for details on how to configure mod_wl_ohs to route requests from OHS to WLS:
http://download.oracle.com/docs/cd/E12839_01/web.1111/e10144/under_mods.htm#BABGCGHJ
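The routing described above boils down to pointing the OHS plug-in at both managed servers. A minimal mod_wl_ohs fragment, using the ports and context path from the URLs above (the hostname and everything else is illustrative):

```apache
<IfModule weblogic_module>
  # List every managed server in the cluster; the plug-in
  # load-balances across them and fails over when one is down.
  WebLogicCluster server1:7003,server1:7005
</IfModule>

<Location /App_name>
  SetHandler weblogic-handler
</Location>
```

Clients then use the single OHS URL, and a managed-server shutdown no longer breaks their sessions as long as session replication is enabled in the cluster.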
Hope this helps.
Thanks
Shail -
JNDI bindings not being replicated across servers in a cluster
According to the Weblogic Server documentation, JNDI bindings are
automatically replicated across the servers in a cluster.
http://www.weblogic.com/docs45/classdocs/weblogic.jndi.WLContext.html#REPLICATE_BINDINGS
This is not proving to be true in my testing. Perhaps it's "just broken", or
perhaps my cluster isn't correctly configured...
Here is a reproducible case. Install the following on two or more servers in
a cluster. You'll need to change line 26 of test1 to reference one of the
servers explicitly.
Test 1 works fine, since both web servers connect to the same server for
JNDI usage. The test should return the last host to hit the page, along with
the current host name. Alternating between servers in the cluster will
alternate the results. However, the fact that I'm specifically naming a
server in the cluster breaks the whole point of clustering -- if that server
goes down, the application ceases to function properly.
Test 2, which is supposedly the right way to do it, does not work, an error
message is logged:
Tue Jan 04 08:17:15 CST 2000:<I> <ConflictHandler> ConflictStart lastviewhost:java.lang.String (from [email protected]:[80,80,7002,7002,-1])
And then both servers begin to report that they have been the only server to
hit the page. Alternating between servers will have no effect -- both
servers are looking solely at their own copies of the JNDI tree. No
replication is occurring.
What is up with this? Any ideas?
Tim
[test1.jsp]
[test2.jsp]
1. yes
<JMSConnectionFactory AllowCloseInOnMessage="false"
DefaultDeliveryMode="Persistent" DefaultPriority="4"
DefaultTimeToLive="0"
JNDIName="xlink.jms.factory.commerceFactory"
MessagesMaximum="10" Name="xlink.jms.factory.commerceFactory"
OverrunPolicy="KeepOld" Targets="bluej,biztalk-lab,devtestCluster"/>
2. No I am just using the jndi name of the queue.
This is an example of how I send a message:
Context ctx = new InitialContext();
QueueConnectionFactory qconFactory;
QueueConnection qcon;
QueueSession qsession;
QueueSender qsender;
Queue queue;
ObjectMessage msg;
qconFactory = (QueueConnectionFactory) ctx.lookup("xlink.jms.factory.commerceFactory");
qcon = qconFactory.createQueueConnection();
qsession = qcon.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
queue = (Queue) ctx.lookup("xlink.jms.queue.biztalk-lab.OrdrspImport");
qsender = qsession.createSender(queue);
msg = qsession.createObjectMessage(reportExecutorContainer);
qcon.start();
qsender.send(msg);
qsender.close();
qsession.close();
qcon.close();
3. I don't know those settings (WL 6.1 SP7)
The Cluster service is shutting down because quorum was lost
Hi, we recently experienced the above issue and after looking for explanations I haven't been able to find any satisfying answers when other people have posted this issue.
Our problem is as follows:
2 node 2008R2 cluster running SQL 2012
Each node is a HP BL460c running in a HP C7000 Blade Chassis.
We were updating the flexfabric cards on one of the chassis. The other chassis had been patched the previous week with no problems.
During the update process the flexfabric cards, which hold the Ethernet and FC connections, reboot so before work had begun all active cluster services had been failed over to the node in the chassis not being worked on. However despite this the cluster
service shut down on this one particular cluster. All other clusters running across these 2 chassis continued to run as expected.
As other people have posted before we saw the following errors in the system log.
1564: File share witness resource 'File Share Witness' failed to arbitrate for the file share
1069: Cluster resource 'File Share Witness' in clustered service or application 'Cluster Group' failed.
1172: The Cluster service is shutting down because quorum was lost. This could be due to the loss of network connectivity between some or all nodes in the cluster, or a failover of the witness disk.
Run the Validate a Configuration wizard to check your network configuration. If the condition persists, check for hardware or software errors related to the network adapter. Also check for failures in any other network components to which the node is connected
such as hubs, switches, or bridges.
However we cant understand what could cause this to happen when the service is running on the node in the chassis not being updated, especially when the same update was performed the week before with no issues. How can both nodes lose connectivity
to the File Share Witness at the same time?
Cluster Validation tests run fine and don't highlight any issues. The file share witness is accessible from both servers.
Hi,
Please confirm you have installed the recommended hotfixes and updates for Windows Server 2008 R2 SP1 failover clusters, especially the following hotfixes.
The network location profile changes from "Domain" to "Public" in Windows 7 or in Windows Server 2008 R2
http://support.microsoft.com/kb/2524478/EN-US
A hotfix is available that adds two new cluster control codes to help you determine which cluster node is blocking a GUM update in Windows Server 2008 R2 and Windows Server 2012
http://support.microsoft.com/kb/2779069/EN-US
Hope this helps.
We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time.
Thanks for helping make community forums a great place. -
SSO - session time out while navigating across applications
Hi,
Problem statement
Handling session time out while navigating across applications involving SSO
Current approach
Application 1
1. Create session1.
2. URL-rewrite the session ID1 into the link referring to App2.
Application 2
1. Create session2
2. Get the session Id of App1.
3. send the session ID of App1 in the header
4. Invalidate the session2
Application 1
Get the ID from request and invoke getSession.
I'm having a very large session timeout at App1.
Is there a better approach? Ex: having a global session which is shared across multiple web applications.
"madhav" <[email protected]> wrote:
>
Hi,
Problem statement
Handling session time out while navigating across applications involving
SSO
Current approach
Application 1
1. Create session1.
> 2. URL-rewrite the session ID1 into the link referring to App2.
Application 2
1. Create session2
2. Get the session Id of App1.
3. send the session ID of App1 in the header
4. Invalidate the session2
Application 1
Get the ID from request and invoke getSession.
I'm having a very large session timeout at App1.
Is there a better approach. Ex: Having global session which is shared
across multiple
webapplications.
I have similar problems in my system. What do you do if session 1 times out during ongoing operations in App 2?
Thanks
Kejuan