Mounting Remote Shares with the Same Name
I am accessing shares on remote servers and can mount volumes with no problems initially, using Command-K and setting the Server Address to smb://ServerNameOrIP/SharePoint. This mounts a volume with the name of the share point. The problem I've run into is that at work I need to mount 2 volumes on 2 different servers with the same share point name. These are shared servers, and for unrelated reasons the folder names on either server cannot be changed. This doesn't work well for me. It creates the first volume with the name "Share" and a second one that in Finder appears to be called "Share" as well, but if I use "Get Info" or run ls /Volumes in Terminal I can see that it actually gets mounted as "Share-1".
Is it possible to change the name of one (or both) of these mounts? Mostly so I can see which one I'm on when browsing through Finder. I changed some settings in Finder so that the title bar lets me see whether I'm on "Share" or "Share-1", but that doesn't actually tell me which server I'm on. Any ideas?
I just discovered this after trying fruitlessly to get 2 shares of the same name from 2 different NAS drives to auto-mount.
Is this a limitation with autofs or something? It should really be painless, as it is in Windows, but it seems that on the Mac side, OS X won't allow shares with the same name to auto-mount.
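One workaround is to mount each share yourself at a server-qualified mount point with mount_smbfs, which sidesteps the automatic /Volumes/Share-1 naming. A minimal sketch, assuming hypothetical hosts "server1"/"server2", user "user", and share "Share" (substitute your own):

```shell
# Sketch: mount two same-named SMB shares at distinct mount points.
# "server1", "server2", "user", and "Share" are placeholders.

# Build a server-qualified mount point so the volume name itself
# tells you which server you are on
mount_point() {
  echo "/Volumes/$1-$2"
}

MP1=$(mount_point server1 Share)
MP2=$(mount_point server2 Share)
echo "$MP1"
echo "$MP2"

# On a real system you would then run (requires reachable servers
# and valid credentials):
# mkdir -p "$MP1" "$MP2"
# mount_smbfs "//user@server1/Share" "$MP1"
# mount_smbfs "//user@server2/Share" "$MP2"
```

Because you choose the mount point, Finder shows "server1-Share" and "server2-Share" instead of "Share" and "Share-1".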
Similar Messages
-
Map 2 NAS shares with the same name.
Hello,
I can map each of them, but not two at a time.
Let's say server1 has a share called "public" and server2 has the same share.
No, renaming the share is not possible.
Any idea?
Pls. help!!!
Thx
Hans
I just discovered this after trying fruitlessly to get 2 shares of the same name from 2 different NAS drives to auto-mount.
Is this a limitation with autofs or something? It should really be painless, as it is in Windows, but it seems that on the Mac side, OS X won't allow shares with the same name to auto-mount. -
Network volumes with the same name
Hi everybody.
I recently got a Mac at the office. Yay! But I have an issue with network volumes. There are several machines I need to connect to using smb/cifs. Two of our development servers (Linux servers using smb/cifs) are configured to share each user's home directory. Naturally, my user name on both machines is the same, and therefore so is the name of the smb share. Unfortunately, Mac OS X uses the share name as the volume name when it connects. This means I have two volumes with the same name, so I can't make any aliases or have any quick access to them. That's because aliases resolve to whichever volume was mounted first, regardless of which one they were originally made to.
Other people in the office use Windows to access the shares, which poses no problem because in Windows you either give the full path of the share (including host name, which makes the paths distinct), or map a distinct drive to the share. I also mount them from my Linux, but on Linux you explicitly give the mount point, and thus make it distinct.
Any way to make the volumes distinct on the Mac? Or otherwise to make aliases resolve correctly in such a setup?
There are several ways you can deal with this. First, you can use the command-line mount_smbfs command, which does let you specify a mount point. You can easily automate the process of creating a mount point and mounting the share using Automator so you don't have to do it by hand every time, and you can add the Automator workflow to your login items so it happens automatically on every login. Lastly, you can use autofs or add entries to /etc/fstab to automount the SMB shares on startup; both methods let you specify mount points too. See this link for details:
http://rajeev.name/2007/11/23/autofs-goodness-in-apples-leopard-105-part-ii/ -
Two Domain Controllers with the Same Name
So I was working on setting up our new branch office DC. Anyway, the server failed to join the domain the first time because it upgraded the AD schema (This was our first 2012 R2 server) and the schema wasn't synced to all the other remote offices. So I
forced a sync, joined the server as a workstation, then made it a domain controller.
Anyway, after that the server would show itself as a DC in Active Directory, but all the other servers believed it was just a workstation. So, I removed Active Directory from the server (I had to force the removal). I reset the computer account on the local
DCs, then rejoined it to the domain and made it a domain controller again. This time, it appeared as a Domain Controller on the other DCs in the domain.
Now for the issue --- I've now got two objects for the server under AD Sites and Services. One of them doesn't appear to have any AD DS connections. The other has connections, but not all of them work correctly (I get errors when I tell certain connections
to sync).
What should I do to fix this?
I'm still in the setup phase of this, so I can do anything I want with this particular server. I was thinking I would demote from a Domain Controller, remove it from the domain. Then use ntdsutil to cleanup any other metadata that is hanging around in AD (Something
like: https://support.microsoft.com/KB/216498?wa=wsignin1.0 )
Does anyone else have suggestions on what I should do to fix this? --- I'm being overly cautious here as I do not want to mess anything up in Active Directory.
Thanks!
I have not done a metadata cleanup.... I was asking if I should.
The connections on the valid server appeared to be working before I deleted them (maybe it took a while to replicate?).
So I went through and deleted all the AD Sites and Services connections from both servers (The broken server had 5 connections to the same DC in another site). Anyway, I ran repadmin /kcc and it regenerated a connection to a server in the remote site, but
it also generated a connection between the two servers with the same name. I ran dcdiag after I did the repadmin /kcc. Anyway it shows:
Directory Server Diagnosis
Performing initial setup:
Trying to find home server...
Home Server = DC-01-CLE
* Identified AD Forest.
Done gathering initial info.
Doing initial required tests
Testing server: Cleveland\DC-01-CLE
Starting test: Connectivity
......................... DC-01-CLE passed test Connectivity
Testing server:
Cleveland\DC-01-CLE\0ACNF:203cf49f-8cb3-4915-b122-be31ddd6e10e
Starting test: Connectivity
[DC-01-CLE\0ACNF:203cf49f-8cb3-4915-b122-be31ddd6e10e]
DsBindWithSpnEx() failed with error 5,
Access is denied..
Got error while checking LDAP and RPC connectivity. Please check your
firewall settings.
DC-01-CLE\0ACNF:203cf49f-8cb3-4915-b122-be31ddd6e10e failed test
Connectivity
Doing primary tests
Testing server: Cleveland\DC-01-CLE
Starting test: Advertising
......................... DC-01-CLE passed test Advertising
Starting test: FrsEvent
......................... DC-01-CLE passed test FrsEvent
Starting test: DFSREvent
......................... DC-01-CLE passed test DFSREvent
Starting test: SysVolCheck
......................... DC-01-CLE passed test SysVolCheck
Starting test: KccEvent
A warning event occurred. EventID: 0x80000785
Time Generated: 12/15/2014 09:58:02
Event String:
The attempt to establish a replication link for the following writable directory partition failed.
A warning event occurred. EventID: 0x80000785
Time Generated: 12/15/2014 09:58:02
Event String:
The attempt to establish a replication link for the following writable directory partition failed.
A warning event occurred. EventID: 0x80000785
Time Generated: 12/15/2014 09:58:02
Event String:
The attempt to establish a replication link for the following writable directory partition failed.
A warning event occurred. EventID: 0x80000785
Time Generated: 12/15/2014 09:58:11
Event String:
The attempt to establish a replication link for the following writable directory partition failed.
A warning event occurred. EventID: 0x80000785
Time Generated: 12/15/2014 09:58:11
Event String:
The attempt to establish a replication link for the following writable directory partition failed.
A warning event occurred. EventID: 0x80000785
Time Generated: 12/15/2014 09:58:11
Event String:
The attempt to establish a replication link for the following writable directory partition failed.
A warning event occurred. EventID: 0x80000785
Time Generated: 12/15/2014 10:03:37
Event String:
The attempt to establish a replication link for the following writable directory partition failed.
A warning event occurred. EventID: 0x80000785
Time Generated: 12/15/2014 10:03:37
Event String:
The attempt to establish a replication link for the following writable directory partition failed.
A warning event occurred. EventID: 0x80000785
Time Generated: 12/15/2014 10:03:37
Event String:
The attempt to establish a replication link for the following writable directory partition failed.
......................... DC-01-CLE passed test KccEvent
Starting test: KnowsOfRoleHolders
......................... DC-01-CLE passed test KnowsOfRoleHolders
Starting test: MachineAccount
......................... DC-01-CLE passed test MachineAccount
Starting test: NCSecDesc
......................... DC-01-CLE passed test NCSecDesc
Starting test: NetLogons
......................... DC-01-CLE passed test NetLogons
Starting test: ObjectsReplicated
......................... DC-01-CLE passed test ObjectsReplicated
Starting test: Replications
......................... DC-01-CLE passed test Replications
Starting test: RidManager
......................... DC-01-CLE passed test RidManager
Starting test: Services
......................... DC-01-CLE passed test Services
Starting test: SystemLog
A warning event occurred. EventID: 0x00001795
Time Generated: 12/15/2014 10:03:37
Event String:
The program lsass.exe, with the assigned process ID 600, could not authenticate locally by using the target name LDAP/a23a13d0-8434-4344-bd6b-24fdf5576329._msdcs.mydomain.local. The target name used is not valid. A target name should refer to one of the local computer names, for example, the DNS host name.
......................... DC-01-CLE passed test SystemLog
Starting test: VerifyReferences
......................... DC-01-CLE passed test VerifyReferences
Testing server:
Cleveland\DC-01-CLE\0ACNF:203cf49f-8cb3-4915-b122-be31ddd6e10e
Skipping all tests, because server
DC-01-CLE\0ACNF:203cf49f-8cb3-4915-b122-be31ddd6e10e is not responding to
directory service requests.
Running partition tests on : DomainDnsZones
Starting test: CheckSDRefDom
......................... DomainDnsZones passed test CheckSDRefDom
Starting test: CrossRefValidation
......................... DomainDnsZones passed test
CrossRefValidation
Running partition tests on : ForestDnsZones
Starting test: CheckSDRefDom
......................... ForestDnsZones passed test CheckSDRefDom
Starting test: CrossRefValidation
......................... ForestDnsZones passed test
CrossRefValidation
Running partition tests on : Schema
Starting test: CheckSDRefDom
......................... Schema passed test CheckSDRefDom
Starting test: CrossRefValidation
......................... Schema passed test CrossRefValidation
Running partition tests on : Configuration
Starting test: CheckSDRefDom
......................... Configuration passed test CheckSDRefDom
Starting test: CrossRefValidation
......................... Configuration passed test CrossRefValidation
Running partition tests on : mydomain
Starting test: CheckSDRefDom
......................... mydomain passed test CheckSDRefDom
Starting test: CrossRefValidation
......................... mydomain passed test CrossRefValidation
Running enterprise tests on : mydomain.local
Starting test: LocatorCheck
......................... mydomain.local passed test LocatorCheck
Starting test: Intersite
Doing intersite inbound replication test on site Cleveland:
......................... mydomain.local passed test Intersite
I've attached a screenshot of AD Sites and Services. Please note I've erased some info for privacy reasons (the site the other DC is in has been erased, as well as part of its name).
Picture of AD Sites and Services -
Sharing two folders with the same name
Hi all.
I have two folders with the same name and I would like to be able to share these under different share names. Problem is, this doesn't seem to be possible.
For instance, try doing this in File Sharing under Server Preferences:
* Click +, add /Data/Media
* Edit permissions on "Media" to permit guest access
* Click +, add /Volumes/Drobo/Media
* Edit permissions on "Media" (make sure you click the right one!) to permit guest access.
This appears on the surface to work, but what it has actually done is to delete the share for /Data/Media. If you exit the File Sharing pane and go back into it again, it will be gone.
Server Admin has the ability to rename a share's name from AFP,SMB,FTP,etc. but this doesn't appear to help either -- I tried adding the second media first, renaming its shared name to Media2 over in Server Admin, and then adding the first. Server Preferences just deletes the second one.
Such a basic thing as being able to rename the share from Server Preferences would appear to be enough to get around this, but since Apple didn't make it possible, I have no idea how to proceed.
Does anyone else have this working, and how did you do it?
The best way to solve this would be to make sure you use the database parameter GLOBAL_NAME to change your database from, let's say, orcl1 to orcl1.mycorpdomain.com; this way you can make sure each database actually has a different name. Your other database could then be named orcl1.example.com.
When changing the display name in EM you might face other issues later on, for instance when trying to run a restore using EM for one of these databases.
Regards
Rob
http://oemgc.wordpress.com -
Are drives with the same name differentiated?
I'm running OS X 10.6.8 with three external drives on which data is stored (a main drive, A-Data, and two backups). A friend may decide to help me with a project and to do so will have to have the same setup. Naming the drives on the second system is of some concern to me, so I did some research.
The fellow at http://www.cnet.com/au/news/drives-in-os-x-appearing-with-1-appended-to-their-names/ states:
If by chance you mount two drives of the same name, because the system can't create two mount points with the same name it appends sequential numbers to new mount points as they are created, and therefore you will see the numbered drive names in the Finder.
Unless I misunderstand what he is talking about, I found that I can have two drives with the same name. I can have two drives named A-Data on my desktop. And therein lies my problem (maybe). The project involves Premiere and several of its sister programs, all of which require external files (video, audio, stills) to be linked, not stored within the working file.
QUES 1
Are disks with the same name distinguishable by OSX or software?
QUES 2
Assume that
I am working on a project which is saved on A-Data (and all the project links are to files on that particular A-Data)
I then connect another disk with the same name (and with the same folder structure), and backup my files to it.
I then eject the first A-Data and open the project on the second A-Data.
Will the software say that it can't find the linked files (because they are on the ejected disk), or will the project open as normal?
Avoid having the same names if at all possible, especially if you use apps that 'link' to the media via the file path.
The file path is set by the order in which the disks mount, which can change across reboots.
To see what is going on, use Disk Utility…
Select the first volume and look at the mount point (a blue link at the bottom of the window)…
/Volumes/A-Data
Now connect the second volume & select it, look at the mount point…
/Volumes/A-Data 1
Now reboot, power off the first disk, leave the second one on, and power disk one back on when the OS has completed booting. They will show the same paths, but each disk is now the 'other' disk - this makes a big mess for apps like Premiere.
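The stable way to tell same-named disks apart is the Volume UUID rather than the mount path. A small sketch: the diskutil command is real macOS, but since it can't be run here, we parse a captured sample of its output (the UUID value is made up for illustration):

```shell
# On a real Mac, the stable identifier survives mount-order changes:
#   diskutil info /Volumes/A-Data | grep 'Volume UUID'
# Here we parse a captured sample of that output; the UUID below is
# fabricated for illustration.
SAMPLE='   Volume UUID:              12345678-ABCD-4EF0-9876-0123456789AB'
UUID=$(echo "$SAMPLE" | awk -F': *' '/Volume UUID/ {print $2}')
echo "$UUID"
```

Comparing UUIDs (instead of names) before opening a Premiere project would tell you which physical A-Data is actually mounted.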
The only way to make this work is to clone every file to be identical on both disks, and never use Premiere (or any app that links to media) when both disks are connected. Otherwise you risk linking to media files on 'A-Data' and 'A-Data 1'.
Premiere must have tools for sharing projects - look at them before using disk names in this way.
You could however clone 'disk 1' to 'disk 2' & then rename it on the second Mac (to make them identical to fix the paths for Premiere), just avoid bringing them to one Mac with the same disk names. It does make more headaches in future though, because your friend will add new media that you need to migrate back to your disk. -
Multiple volumes with the same name
If I look at my /Volumes folder using Finder>Go>Go to Folder.. I see seven different volumes all starting with the same name:
MyName
MyName 1
MyName-1
MyName-2
MyName-3
MyName-4
MyName-5
(as well as my external drive and iMac HD)
MyName has a folder icon, and contains 1 folder called Backup which then has my 1Password.keychain in 7 different versions.
MyName 1 and MyName-1 are aliases and have icons for shared volumes. The only differences I can see are that in "MyName 1", the folders for Backup, Groups and Library are all displayed as aliases, whereas in "MyName-1" they are displayed as standard folders.
MyName-2 through MyName-5 all have folder icons and are empty.
I can understand that I could have a local and remote copy of my iDisk, hence the "MyName 1" and "MyName-1" volumes, but why all the others?
Should I delete any of MyName-2 through -5?
Is any of this slowing down iDisk syncing or searching?
Thanks in advance
I have the same problem
an external FireWire drive "232 GB" is gaining an extra, sequentially numbered mount point in /Volumes/ each time it is unplugged and re-plugged.
This is a bit of a pain as Xtorrent is downloading to a folder on this drive and so, whenever it is unplugged, I have to either:
restart all the downloads or,
before launching Xtorrent, go into /Volumes/ and delete the last mount point (so the system automatically adds the one Xtorrent is expecting)
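The manual cleanup described above can be sketched like this. To stay safe, the demo runs against a scratch directory rather than the real /Volumes, and relies on rmdir refusing to remove non-empty directories, so a mount point that is actually in use is left alone:

```shell
# Demonstrate stale-mountpoint cleanup on a scratch directory
# instead of the real /Volumes.
TMP=$(mktemp -d)
mkdir "$TMP/232 GB 1" "$TMP/232 GB 2"
touch "$TMP/232 GB 2/file"   # simulate a mount point that is in use

for d in "$TMP/232 GB "*; do
  # rmdir only removes empty directories, so anything with
  # contents survives the sweep
  rmdir "$d" 2>/dev/null && echo "removed: $d" || true
done
```

Against the real /Volumes you would glob "/Volumes/232 GB "* instead, and only while the drive is unplugged.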
just noticed you're in Sheffield too, maybe it's a local problem....
no idea what's causing it -
RE: multiple named objects with the same name and interface
David,
First I will start by saying that this can be done by using named anchored
objects and registering them yourself in the name service. There is
documentation on how to do this. And by default you will get most of the
behavior you desire. When you do a lookup in the name service (BindObject
method) it will first look in the local partition and see if there is a
local copy and give you that copy. By anchoring the object and manually
registering it in the name service you are programmatically creating your
own SO without defining it as such in the development environment. BTW in
response to your item number 1. This should be the case there as well. If
your "mobile" object is in the same partition where the service object he is
calling resides, you should get a handle to the local instance of the
service object.
Here is the catch, if you make a bind object call and there is no local copy
you will get a handle to a remote copy but you can not be sure which one!
It ends up as more or less a random selection. Off the top of my head and
without going to the doc, I am pretty sure that when you register an
anchored object you cannot limit its visibility to "User".
Sean
-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of David Foote
Sent: Monday, June 22, 1998 4:51 PM
To: [email protected]
Subject: multiple named objects with the same name and interface
All,
More than once, I have wished that Forte allowed you to place named
objects with the same name in more than one partition. There are two
situations in which this seems desirable:
1) Objects that are not distributed, but are mobile (passed by value to
remote objects), cannot safely reference a Service Object unless it has
environment visibility, but this forces the overhead of a remote method
call when it might not otherwise be necessary. If it were possible to
place a copy of the same Service Object (with user visibility) in each
partition, the overhead of a remote method call could be avoided. This
would only be useful for a service object whose state could be safely
replicated.
2) My second scenario also involves mobile objects referencing a Service
Object, but this time I would like the behavior of the called Service
Object to differ with the partition from which it is called.
This could be accomplished by placing Service Objects with the same name
and the same interface in each partition, but varying the implementation
with the partition.
Does anyone have any thoughts about why this would be a good thing or a
bad thing?
David N. Foote
Consultant
Get Your Private, Free Email at http://www.hotmail.com
To unsubscribe, email '[email protected]' with
'unsubscribe forte-users' as the body of the message.
Searchable thread archive <URL:http://pinehurst.sageit.com/listarchive/>
-
Hiding Fields with the same name
I have multiple fields across a form with the same name
TxtName#1
TxtName#2
TxtName#3..... TxtName#24
They all share the same name because they should all have the same data.
I want to be able to hide some of these fields, based on the number entered in another field TxtQty.
for example if TxtQty=2 then
TxtName#3-TxtName#24 will be hidden and only TxtName#1 and TxtName#2 will be visible.
Can anyone please help?
Thanks!
The easiest way is to rename TxtName #1 and #2 to "TxtName2", and use the following custom Validate script for each:
// Copy this field's value to the TxtName fields
getField("TxtName").value = event.value;
You can then hide all of the TxtName fields with:
getField("TxtName").display = display.hidden; -
Query parameters with the same name and different values
According to HTTP, multiple query or post parameters with the
same name and different values are permitted. They are transfered
over the wire in the following format -
name1=val1&name1=val2&name1=val3
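As a sketch of that wire format (the parameter name and values are placeholders), repeating the parameter name once per value builds the query string by hand:

```shell
# Build name1=val1&name1=val2&name1=val3 by repeating the
# parameter name once per value (names/values are placeholders).
QS=""
for v in val1 val2 val3; do
  QS="${QS:+$QS&}name1=$v"
done
echo "$QS"
```

This is the shape a server-side framework expects when it exposes repeated parameters as an array of values.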
The problem is that I can't see any way of assigning multiple
parameters with the same name and different values to the request
object of mx.rpc.http.HTTPService. I have tried using the
flash.utils.Dictionary object as it does strict key comparison but
that doesn't work too. I have tried setting an array of values to a
property of the request object but that sends the request to the
server in the following format -
name1=val1,val2,val3
The java servlet engines throw exceptions when they see this.
Any help would be greatly appreciated.
If you're not on 8.1.4, move there. 8.1.3 had limitations in the wsrp release.
wrote:
I have an html select box that contains several values, and multiple
selection is enabled. When my code runs as a remote portlet, the
following is showing up in the soap monitor when I select multiple
values and submit the form:
<urn:interactionParams>
<urn:portletStateChange>cloneBeforeWrite</urn:portletStateChange>
<urn:interactionState>_action=addEmployeesToGroup</urn:interactionState>
<urn:formParameters
name="P62005wlw-select_key:{actionForm.selectedEmployees}OldValue">
<urn:value>true</urn:value>
</urn:formParameters>
<urn:formParameters
name="P62005wlw-select_key:{actionForm.selectedEmployees}">
<urn:value>beatest1</urn:value>
</urn:formParameters>
In this case, I selected beatest1 and beatest2, but only beatest1 comes
through to the remote portlet. Is this a known bug, and, if so, is
there a patch or workaround available?
Thanks in advance,
Andy -
Revision: 889
Author: [email protected]
Date: 2008-03-21 13:08:05 -0700 (Fri, 21 Mar 2008)
Log Message:
Add test case for BLZ-82 where HttpService should return multiple headers with the same name.
Ticket Links:
http://bugs.adobe.com/jira/browse/BLZ-82
Added Paths:
blazeds/trunk/qa/apps/qa-regress/remote/MultipleHeadersTest.jsp
blazeds/trunk/qa/apps/qa-regress/testsuites/mxunit/tests/proxyService/httpservice/MultiHeaderTest.mxml
Hi again,
this may be old news to some people, but I just realized we can have the desired benefits I originally listed (encapsulation, reuse, maintainability, security) TODAY by using pipelined functions and using the table() function in Apex report region queries.
So the report query basically becomes, for example (if get_employees is a pipelined function)
select * from table(my_package.get_employees(:p1_deptno))
The only downside compared to a (weakly typed) sys_refcursor is that you have to define the type you are returning in your package spec (or as an SQL type). So it's a bit more coding, but it's still worth it for the other benefits it provides.
I like Apex even better now! :-)
- Morten -
Multiple ResourceBundle definitions with the same name
I created a new web application project with JSF support using NetBeans 5.5 Beta. The new project has a simple welcomeJSF page that I did not modify. The only modifications I made were adding a CustomMessages.properties file under com.mywebsite.resources and adding the following entry to faces-config.xml:
<application>
<resource-bundle>
<base-name>com.mywebsite.resources.CustomMessages</base-name>
<var>BundleOne</var>
</resource-bundle>
<locale-config>
<default-locale>en</default-locale>
</locale-config>
</application>
Now when I right-click welcomeJSF and select Run File, the following error is added to the log file of SJSAS PE9:
Message ID:
WebModule[/WebApplication1]Exception sending context initialized event to listener instance of class com.sun.faces.config.GlassFishConfigureListener javax.faces.FacesException
Complete Message
Can't parse configuration file: jndi:/server/WebApplication1/WEB-INF/faces-config.xml: Error at line 14 column 27: Error at (14, 27: Multiple ResourceBundle definitions with the same name: BundleOne.
at com.sun.faces.config.ConfigureListener.parse(ConfigureListener.java:1751)
at com.sun.faces.config.ConfigureListener.contextInitialized(ConfigureListener.java:524)
at com.sun.faces.config.GlassFishConfigureListener.contextInitialized(GlassFishConfigureListener.java:47)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4236)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4760)
at com.sun.enterprise.web.WebModule.start(WebModule.java:292)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:833)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:817)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:659)
at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1468)
at com.sun.enterprise.web.WebContainer.loadWebModule(WebContainer.java:1133)
at com.sun.enterprise.server.WebModuleDeployEventListener.moduleDeployed(WebModuleDeployEventListener.java:171)
at com.sun.enterprise.server.WebModuleDeployEventListener.moduleDeployed(WebModuleDeployEventListener.java:275)
at com.sun.enterprise.admin.event.AdminEventMulticaster.invokeModuleDeployEventListener(AdminEventMulticaster.java:954)
at com.sun.enterprise.admin.event.AdminEventMulticaster.handleModuleDeployEvent(AdminEventMulticaster.java:941)
at com.sun.enterprise.admin.event.AdminEventMulticaster.processEvent(AdminEventMulticaster.java:448)
at com.sun.enterprise.admin.event.AdminEventMulticaster.multicastEvent(AdminEventMulticaster.java:160)
at com.sun.enterprise.admin.server.core.DeploymentNotificationHelper.multicastEvent(DeploymentNotificationHelper.java:296)
at com.sun.enterprise.deployment.phasing.DeploymentServiceUtils.multicastEvent(DeploymentServiceUtils.java:203)
at com.sun.enterprise.deployment.phasing.ServerDeploymentTarget.sendStartEvent(ServerDeploymentTarget.java:285)
at com.sun.enterprise.deployment.phasing.ApplicationStartPhase.runPhase(ApplicationStartPhase.java:119)
at com.sun.enterprise.deployment.phasing.DeploymentPhase.executePhase(DeploymentPhase.java:95)
at com.sun.enterprise.deployment.phasing.PEDeploymentService.executePhases(PEDeploymentService.java:871)
at com.sun.enterprise.deployment.phasing.PEDeploymentService.start(PEDeploymentService.java:541)
at com.sun.enterprise.deployment.phasing.PEDeploymentService.start(PEDeploymentService.java:585)
at com.sun.enterprise.admin.mbeans.ApplicationsConfigMBean.start(ApplicationsConfigMBean.java:719)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at com.sun.enterprise.admin.MBeanHelper.invokeOperationInBean(MBeanHelper.java:353)
at com.sun.enterprise.admin.MBeanHelper.invokeOperationInBean(MBeanHelper.java:336)
at com.sun.enterprise.admin.config.BaseConfigMBean.invoke(BaseConfigMBean.java:448)
at com.sun.jmx.mbeanserver.DynamicMetaDataImpl.invoke(DynamicMetaDataImpl.java:213)
at com.sun.jmx.mbeanserver.MetaDataImpl.invoke(MetaDataImpl.java:220)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:815)
at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:784)
at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:585)
at com.sun.enterprise.admin.util.proxy.ProxyClass.invoke(ProxyClass.java:77)
at $Proxy1.invoke(Unknown Source)
at com.sun.enterprise.admin.server.core.jmx.SunoneInterceptor.invoke(SunoneInterceptor.java:297)
at com.sun.enterprise.admin.jmx.remote.server.callers.InvokeCaller.call(InvokeCaller.java:56)
at com.sun.enterprise.admin.jmx.remote.server.MBeanServerRequestHandler.handle(MBeanServerRequestHandler.java:142)
at com.sun.enterprise.admin.jmx.remote.server.servlet.RemoteJmxConnectorServlet.processRequest(RemoteJmxConnectorServlet.java:109)
at com.sun.enterprise.admin.jmx.remote.server.servlet.RemoteJmxConnectorServlet.doPost(RemoteJmxConnectorServlet.java:180)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at org.apache.catalina.core.ApplicationFilterChain.servletService(ApplicationFilterChain.java:397)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:278)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:566)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:536)
at org.apache.catalina.core.StandardContextValve.invokeInternal(StandardContextValve.java:240)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:179)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:566)
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:73)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:182)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:566)
at com.sun.enterprise.web.VirtualServerPipeline.invoke(VirtualServerPipeline.java:120)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:939)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:137)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:566)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:536)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:939)
at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:231)
at com.sun.enterprise.web.connector.grizzly.ProcessorTask.invokeAdapter(ProcessorTask.java:667)
at com.sun.enterprise.web.connector.grizzly.ProcessorTask.processNonBlocked(ProcessorTask.java:574)
at com.sun.enterprise.web.connector.grizzly.ProcessorTask.process(ProcessorTask.java:844)
at com.sun.enterprise.web.connector.grizzly.ReadTask.executeProcessorTask(ReadTask.java:287)
at com.sun.enterprise.web.connector.grizzly.ReadTask.doTask(ReadTask.java:212)
at com.sun.enterprise.web.connector.grizzly.TaskBase.run(TaskBase.java:252)
at com.sun.enterprise.web.connector.grizzly.WorkerThread.run(WorkerThread.java:75)
Go ahead and doubt if it makes you feel superior.
For others:
If your pom has this:
<dependency>
<groupId>javax.faces</groupId>
<artifactId>jsf-api</artifactId>
<version>${jsf.version}</version>
</dependency>
<dependency>
<groupId>javax.faces</groupId>
<artifactId>jsf-impl</artifactId>
<version>${jsf.version}</version>
</dependency>
where jsf.version = 1.2, change jsf.version to 1.2_10. -
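If jsf.version is defined as a Maven property, that change is a one-line edit. A minimal sketch of the property section, assuming the pom uses a standard properties block (the exact location in your pom may differ):

```xml
<properties>
    <!-- was 1.2; bump to 1.2_10 per the fix above -->
    <jsf.version>1.2_10</jsf.version>
</properties>
```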
How do I call methods with the same name from different classes
I have a User class which needs to call methods with the same name on objects of different types.
My User class will create an object of type MyDAO or of type RemoteDAO.
The RemoteDAO class is simply a wrapper around a MyDAO object that allows it to be accessed remotely. Note that the interface MyInterface, which MyDAO must implement, cannot throw RemoteExceptions.
The problem is I have ended up with 2 identical User classes that differ only in the type of object they call; the method names and functionality are identical.
Is there any way I can get around this problem?
Thanks ... J
My classes are defined as follows:
import java.rmi.server.UnicastRemoteObject;

interface MyInterface{
    //Does not and CANNOT declare to throw any exceptions
    public String sayHello();
}

class MyDAO implements MyInterface{
    public String sayHello(){
        return ("Hello from DAO");
    }
}

interface RemoteDAO extends java.rmi.Remote{
    public String sayHello() throws java.rmi.RemoteException;
}

class RemoteDAOImpl extends UnicastRemoteObject implements RemoteDAO{
    MyDAO dao = new MyDAO();

    //UnicastRemoteObject's constructor can throw RemoteException
    RemoteDAOImpl() throws java.rmi.RemoteException{}

    public String sayHello() throws java.rmi.RemoteException{
        return dao.sayHello();
    }
}

class User{
    //MyDAO dao = new MyDAO();
    //OR
    RemoteDAO dao;

    //the RemoteDAOImpl constructor can throw, so create it here
    User() throws java.rmi.RemoteException{
        dao = new RemoteDAOImpl();
    }

    public void callDAO(){
        try{
            System.out.println( dao.sayHello() );
        }
        catch( Exception e ){
        }
    }
}
>
That's only a good idea if the semantics of sayHello
as defined in MyInterface suggest that a
RemoteException could occur. If not, then you're
designing the interface to suit the way the
implementing classes will be written, which smells.
:-)
But in practice you can't make a call which can be handled either remotely or locally without, at some point, dealing with the RemoteException.
Therefore either RemoteException must be part of the interface, or (and this is probably more satisfactory) you don't use the remote interface directly, but MyInterface is implemented by a wrapper class which deals with the exception. -
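The wrapper class the reply describes might look like the sketch below. The names RemoteDAOWrapper and WrapperSketch are illustrative, not from the thread, and RemoteDAO is simplified here to a plain interface whose method throws a checked exception (standing in for java.rmi.RemoteException), so no RMI runtime is needed to try it:

```java
// Sketch of the "wrapper implements MyInterface" approach from the reply.
interface MyInterface {
    String sayHello();
}

// Simplified stand-in for the remote interface; in the thread this
// extends java.rmi.Remote and throws java.rmi.RemoteException.
interface RemoteDAO {
    String sayHello() throws Exception;
}

// The wrapper implements MyInterface, delegates to the remote object,
// and converts the checked exception into an unchecked one, so a single
// User class can be written once against MyInterface.
class RemoteDAOWrapper implements MyInterface {
    private final RemoteDAO remote;

    RemoteDAOWrapper(RemoteDAO remote) {
        this.remote = remote;
    }

    public String sayHello() {
        try {
            return remote.sayHello();
        } catch (Exception e) {
            throw new RuntimeException("remote call failed", e);
        }
    }
}

public class WrapperSketch {
    public static void main(String[] args) {
        // A lambda stands in for the real remote stub.
        MyInterface dao = new RemoteDAOWrapper(() -> "Hello from remote DAO");
        System.out.println(dao.sayHello());
    }
}
```

With this, User holds a MyInterface reference and never sees RemoteException; only the wrapper knows the call might be remote.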
How can I merge folders with the same name so that the contents do not replace each other
How can I merge folders with the same name so that the contents of one do not replace the contents of the other?
-
Use two VIs with the same name but different functionality in a single project
Hi, I'm working with Vision Builder AI and LabVIEW. I need to integrate the Vision Builder migration VIs into my main program (this is not optional), but I have multiple auto-generated VIs with the same name. How can I integrate them into the same project without dependency problems?
Thanks in advance
Hi,
Unless you really need tight integration between the Vision code and the rest of your LabVIEW application, or would rather not have to pay for a Vision Builder AI runtime license on the targets on which you want to deploy your final application, I would recommend you look into using the Vision Builder AI LabVIEW API instead of migrating the inspection to LabVIEW.
The API allows you to control Vision Builder AI by launching a Vision Builder AI engine, running the inspection, and retrieving the resulting images and results.
The advantage of the API is that it allows for easier modification and debugging of the vision inspection you designed in Vision Builder AI, if you need to make changes later (i.e. all you need to do is open the inspection in Vision Builder AI, modify parameters, add steps, etc.).
You won't have to change your LabVIEW application, unless you want to output additional results.
When you build and deploy your application, you will need to install Vision Builder AI on the target machine and get a runtime license for it.
Migrating the inspection to LabVIEW is a one-way deal. If you need to make changes to the inspection, you will have to migrate it again, or modify the code outside of the Vision Builder AI environment.
As you might have noticed, the generated code is quite complex, and it is recommended to go this route only if you need really tight integration/synchronization between the vision code and the rest of your LabVIEW code, or if you would rather pay for a cheaper Vision runtime license than a VBAI runtime license for the deployment machine (in the case of deploying multiple systems where cost is a big consideration).
Vision Builder AI API examples are located in this folder:
C:\Program Files (x86)\National Instruments\Vision Builder AI\API Examples\LabVIEW Examples
Hope this helps clarify the use cases and help you make the right decision for your design.
Best regards,
Christophe