NCP services on mixed cluster

Hello, I have a test lab for a cross-upgrade from NetWare to OES 2 Linux.
I have 2 nodes with NetWare 6.5 SP8.
I have removed node 2 and installed OES Linux instead.
On node 2 I attached the shared iSCSI storage with the NSS volumes and installed Novell Cluster Services.
When I fail over the cluster from the NetWare node to the OES Linux node, all resources move over OK, but there is an issue accessing NCP from a Windows XP client:
On the NetWare node:
\\node1\data is accessible from the client desktop using NCP
\\cluster\data is accessible from the client desktop using NCP
When I fail over the cluster to the OES Linux node:
\\node2\data is accessible from the client desktop using NCP
\\cluster\data is NOT accessible from the client desktop using NCP
Do you have any idea how to fix this trouble? What should I check?
Thank you.
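
One thing worth checking, based on how NCS differs between the platforms: on NetWare the virtual NCP server name is bound automatically when a cluster-enabled volume loads, while on OES Linux the resource load script has to bind it explicitly with ncpcon. Below is a minimal sketch of what the Linux load script might look like; the pool name, volume ID, virtual server name, and IP are placeholders, not values taken from this lab:

#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs
exit_on_error nss /poolact=DATA
exit_on_error ncpcon mount DATA=254
exit_on_error add_secondary_ipaddress 192.168.90.20
# Without this bind the volume stays reachable as \\node2\data but not
# under the virtual server name (\\cluster\data):
exit_on_error ncpcon bind --ncpservername=N_DATA_SERVER --ipaddress=192.168.90.20
exit 0

If the bind line is missing from the migrated load script, that would match the symptom exactly.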

I have tried to move the cluster resources from the NetWare node to the Linux node and followed /var/log/messages:
N1 = NetWare node, IP 192.168.90.21
N2 = Linux node, IP 192.168.90.22
N = cluster name, IP 192.168.90.20
Sep 30 16:36:13 N2 nmbd[5083]: [2011/09/30 16:36:13, 0] nmbd/nmbd_incomingdgrams.c:process_local_master_announce(283)
Sep 30 16:36:13 N2 nmbd[5083]: process_local_master_announce: incorrect name type for destination from IP 192.168.90.21 (was 1d) should be 0x1e. Ignoring packet.
Sep 30 16:36:41 N2 kernel: MM_RemirrorPartition: N.sbd, 1
Sep 30 16:36:41 N2 kernel: NWRAID1: Stop Remirror (start) count=0 state=4 wait=0
Sep 30 16:36:41 N2 kernel: NWRAID1: Stop Remirror (end)
Sep 30 16:36:41 N2 kernel: NWRAID1: Remirror enabled
Sep 30 16:36:41 N2 kernel: device-mapper: ioctl: Target type does not support messages
Sep 30 16:36:41 N2 kernel: device-mapper: ioctl: Target type does not support messages
Sep 30 16:36:41 N2 kernel: device-mapper: ioctl: Target type does not support messages
Sep 30 16:36:41 N2 kernel: device-mapper: ioctl: Target type does not support messages
Sep 30 16:36:41 N2 kernel: CLUSTER-<WARNING>-<10290>: CRM:CRMSendNodeCmd: CRMAllocSendDesc returned NULL descriptor
Sep 30 16:36:42 N2 ncs-resourced: Try LDAP for WINNP_SERVER
Sep 30 16:36:42 N2 ncs-resourced: Try LDAP for Master_IP_Address_Resource
Sep 30 16:36:42 N2 ncs-resourced: Try LDAP for SWAPP_SERVER
Sep 30 16:36:42 N2 ncs-resourced: Try LDAP for AAAP_SERVER
Sep 30 16:36:42 N2 ncs-resourced: Preprocessed script Master_IP_Address_Resource.load
Sep 30 16:36:42 N2 kernel: NET: Registered protocol family 17
Sep 30 16:36:57 N2 ncs-resourced: Preprocessed script WINNP_SERVER.load
Sep 30 16:36:57 N2 ncs-resourced: Preprocessed script SWAPP_SERVER.load
Sep 30 16:36:57 N2 ncs-resourced: Preprocessed script AAAP_SERVER.load
Sep 30 16:36:57 N2 kernel: NSS: zlssConsumer.c[984]: Sending ioctl msg to enable RAID device 253:12
Sep 30 16:36:57 N2 kernel: NWRAID1: Stop Remirror (start) count=0 state=4 wait=0
Sep 30 16:36:57 N2 kernel: NWRAID1: Stop Remirror (end)
Sep 30 16:36:57 N2 kernel: NWRAID1: Remirror enabled
Sep 30 16:36:57 N2 kernel: NSS: zlssConsumer.c[1187]: Enabled 1 RAID objects.
Sep 30 16:36:58 N2 kernel: NSSLOG ==> [MSAP] comnLog.c[201]
Sep 30 16:36:58 N2 kernel: Pool "WINNP" - MSAP activate.
Sep 30 16:36:58 N2 kernel: Server(7b8c574a-eb38-11e0-8b-d7-0050568c0003) Cluster(a668b100-daf0-11e0-ad-4d-0050568c0002)
Sep 30 16:36:58 N2 kernel: NSSLOG ==> [MSAP] comnLog.c[201]
Sep 30 16:36:58 N2 kernel: Pool "WINNP" - Watching pool.
Sep 30 16:36:58 N2 kernel: NSSLOG ==> [Upgrade] zlssUpgrade.c[3224]
Sep 30 16:36:58 N2 kernel: ZLSS upgrade thread started.
Sep 30 16:36:58 N2 kernel: NSSLOG ==> [MSAP] comnLog.c[201]
Sep 30 16:36:58 N2 kernel: Pool "SWAPP" - MSAP activate.
Sep 30 16:36:58 N2 kernel: Server(7b8c574a-eb38-11e0-8b-d7-0050568c0003) Cluster(a668b100-daf0-11e0-ad-4d-0050568c0002)
Sep 30 16:36:58 N2 kernel: NSSLOG ==> [MSAP] comnLog.c[201]
Sep 30 16:36:58 N2 kernel: Pool "SWAPP" - Watching pool.
Sep 30 16:36:58 N2 kernel: NSS: zlssConsumer.c[984]: Sending ioctl msg to enable RAID device 253:10
Sep 30 16:36:58 N2 kernel: NWRAID1: Stop Remirror (start) count=0 state=4 wait=0
Sep 30 16:36:58 N2 kernel: NWRAID1: Stop Remirror (end)
Sep 30 16:36:58 N2 kernel: NWRAID1: Remirror enabled
Sep 30 16:36:58 N2 kernel: NSS: zlssConsumer.c[1187]: Enabled 1 RAID objects.
Sep 30 16:36:58 N2 kernel: NSSLOG ==> [MSAP] comnLog.c[201]
Sep 30 16:36:58 N2 kernel: Pool "AAAP" - MSAP activate.
Sep 30 16:36:58 N2 kernel: Server(7b8c574a-eb38-11e0-8b-d7-0050568c0003) Cluster(a668b100-daf0-11e0-ad-4d-0050568c0002)
Sep 30 16:36:58 N2 kernel: NSSLOG ==> [MSAP] comnLog.c[201]
Sep 30 16:36:58 N2 kernel: Pool "AAAP" - Watching pool.
Sep 30 16:36:58 N2 adminus daemon: Volume state change request for WINN from NCP
Sep 30 16:36:58 N2 adminus daemon: mounting volume WINN with extra options (null)
Sep 30 16:36:58 N2 adminus daemon: Mounted volume on shared device. Mount table (fstab) not updated.
Sep 30 16:36:58 N2 adminus daemon: Volume state change request for SWAPV from NCP
Sep 30 16:36:58 N2 adminus daemon: mounting volume SWAPV with extra options (null)
Sep 30 16:36:58 N2 adminus daemon: Mounted volume on shared device. Mount table (fstab) not updated.
Sep 30 16:36:58 N2 adminus daemon: Volume state change request for AAA from NCP
Sep 30 16:36:58 N2 kernel: Couldn't get FDN from LUM for uid=1000, rc=2
Sep 30 16:36:58 N2 adminus daemon: mounting volume AAA with extra options (null)
Sep 30 16:36:58 N2 adminus daemon: Mounted volume on shared device. Mount table (fstab) not updated.
Sep 30 16:36:58 N2 kernel: Couldn't get FDN from LUM for uid=1000, rc=2
Sep 30 16:36:58 N2 kernel: Couldn't get FDN from LUM for uid=1000, rc=2
Sep 30 16:37:12 N2 smdrd[8450]: Received Join Event for N_SWAPP_SERVER
Sep 30 16:37:14 N2 smdrd[8450]: Target name N_SWAPP_SERVER successfully advertised with SLP
Sep 30 16:37:19 N2 smdrd[8450]: Received Join Event for N_SWAPP_SERVER
Sep 30 16:37:19 N2 smdrd[8450]: Received Join Event for N_WINNP_SERVER
Sep 30 16:37:20 N2 smdrd[8450]: Target name N_WINNP_SERVER successfully advertised with SLP
Sep 30 16:37:20 N2 smdrd[8450]: Received Join Event for N_WINNP_SERVER
Sep 30 16:37:20 N2 smdrd[8450]: Received Join Event for N_AAAP_SERVER
Sep 30 16:37:21 N2 smdrd[8450]: Target name N_AAAP_SERVER successfully advertised with SLP
Sep 30 16:37:21 N2 smdrd[8450]: Received Join Event for N_AAAP_SERVER
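
For reference, some checks worth running on the OES node after failover (a sketch; the ncpcon subcommand and the SLP service type are assumptions based on OES 2 defaults, so adjust for your version):

cluster status                            # are the resources really online on N2?
ncpcon volumes                            # which volumes the NCP server exports (assumed subcommand)
slptool findsrvs service:bindery.novell   # server names advertised via SLP (assumed service type)
ping -c 1 192.168.90.20                   # is the cluster IP bound and answering?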

Similar Messages

  • Managed Service Accounts for Cluster

    Hi,
    Is it possible to use MSAs for a 2012 FCI on Windows 2008 R2? Since an MSA can only be associated with one computer, you would have to use multiple MSA accounts, but I've not heard of using service accounts with different names to run a clustered SQL service.
    Thanks,
    Sam

    Hi sam_squarewave,
    We can configure a SQL 2012 standalone instance to use the new Managed Service Accounts feature in Windows 2008 R2. The usual steps are to set up the MSA in Active Directory, install the MSA on the target server, and change the SQL service account. The managed service account is designed to give crucial applications such as Exchange Server and IIS the isolation of their own domain accounts; it is not supported with SQL 2012 Failover Clustered Instances (FCI). For more information about Managed Service Accounts (MSA) and SQL 2012, you can review the following article:
    http://blogs.msdn.com/b/arvindsh/archive/2014/02/03/managed-service-accounts-msa-and-sql-2012-practical-tips.aspx?PageIndex=5
    In addition, when you configure Windows Failover Clustering for SQL Server (Availability Group or FCI) and want to use other accounts, you must set up the accounts and permissions required to create and maintain your HADR solution. For guidance on configuring the required account permissions for WSFC clusters and clustered services, see Failover Cluster Step-by-Step Guide: Configuring Accounts in Active Directory (http://technet.microsoft.com/en-us/library/cc731002(WS.10).aspx).
    There is a detailed write-up on configuring Windows Failover Clustering for SQL Server (Availability Group or FCI) with limited security; you can review it here:
    http://blogs.msdn.com/b/sqlalwayson/archive/2012/06/05/configure-windows-failover-clustering-for-sql-server-availability-group-or-fci-with-limited-security.aspx
    Regards,
    Sofiya Li
    TechNet Community Support

  • Is there a way of passing a mixed cluster of numbers and booleans to teststand

    Hi,
    I have a LabVIEW VI that contains an output cluster with both numeric and boolean results, and I would like to pass this to TestStand. At the moment I have converted all my boolean results to 1/0 so that I can create a numeric array and quite easily pass it to TestStand (using a multiple numeric limit test).
    Is there a way to pass mixed results to TestStand and write in the limits (for example PASS and GT 5V), or do I have to stick with what I have?
    Chris

    Which test step type to use depends on what you have to analyze - a boolean condition? String? Number(s)? I can't tell you because I don't know what's in your cluster. If you click on the plus sign next to the parameter name "output cluster" you will see all parameters and their types, which are passed from the VI to TestStand.
    You can either create a variable for the whole cluster, or you can assign all or just some values from within the cluster to variables.
    The name of the variable (Locals.xxxxxxx... or FileGlobals.xxxxx...) is what you type in the value field. You can also choose the variable from the expression browser by clicking on the f(x) button.
    Are you new to TestStand? Do you know how to work with variables in TS?
    Maybe the attached picture gives you an example: there I am assigning the values from the VI output "VoltageOutputArray" to the TS variable Locals.VoltageMeasurement.
    This variable is used again on the "Data Source" tab as the Data Source Expression.
    Regards,
    gedi
    Attachments:
    stepsettings.jpg ‏89 KB

  • Views on multiple ExtendProxy services in same cluster

    Hi,
    I would like to load-balance 2 sets of clients for the same cluster such that each client set is balanced separately. The motivation is that one set is quite small and the other is much larger, so it can happen that the clients in the smaller set all end up connecting to proxies on one host, instead of proxies on different hosts.
    I can think of 3 ways of doing this:
    1) Create one ExtendProxy service, with 2 sub-options:
    a) is it possible to have an ExtendProxy JVM expose multiple proxy services (ExtendProxyService1, ExtendProxyService2)?
    b) running 2 parallel sets of ExtendProxy JVMs
    2) Moving the load balancing out of the proxy service (to an F5 BigIP in my case) and setting up 2 pools with the same members. That avoids touching the cluster at all.
    Any opinions?

    Am I to understand that you have two managed servers, and you have two admin servers, both managing the two managed servers?
    That doesn't make sense. You shouldn't be managing a managed server from more than one admin server. The admin server represents a single domain. You can't have a managed server that's in more than one domain.

  • Can we access two rpds in a single Presentation Service using BI Cluster

    Hi,
    Can we have a single Presentation Service pointing to multiple RPDs using BI Cluster? Generally, why is a BI Cluster used?

    No, a BI Cluster is generally used for load balancing and to reduce points of failure. In clustering, the same RPD is loaded on multiple servers, and whenever the RPD on the primary server changes, it is copied to the other machines. So conceptually it is the same RPD deployed on all the machines involved in clustering.
    So in clustering, if server 1 fails, then server 2 takes over all the load and serves the end user. That is what clustering is about.
    - Madan

  • SQL server Analysis Windows service Alerts on cluster nodes

    Hi,
    I was wondering if others may be experiencing the same issue with the SQL Server MP.
    We are monitoring SQL Server Analysis Services using the monitor "SQL Server Analysis Service Windows Service".
    We had overridden the "Alert only if service startup type is automatic" option to false.
    This creates false alerts from the other nodes in the cluster on which the service is not running; since it is a clustered configuration, it should not generate alerts from a node while the service is running on another node of the same cluster.
    Please share if anyone has been through this issue and any remediation for it.
    Regards,
    Daya Ram

    Hi,
    Here is a similar thread for your reference; in the thread this issue has been confirmed as a bug, and the poster provided a workaround for it, so please try the workaround:
    Problem with the discovery of clustered SQL Server Analysis Services objects
    http://social.technet.microsoft.com/Forums/en-US/ead946fd-38d1-4627-b60d-a5645d3627fb/problem-with-the-discovery-of-clustered-sql-server-analysis-services-objects?forum=operationsmanagergeneral
    Hope this helps.
    Regards,
    Yan Li
    Regards, Yan Li

  • Restart portal service programmatically in a cluster

    Hello,
    Is it possible to restart services programmatically in a cluster environment from one iView?
    Any help will be appreciated.
    Thanks.
    Pino.

    Hi,
    You might be able to achieve this by reverse engineering the code of com.sap.portal.runtime.system.console (the application under System Admin -> Support which allows you to restart services).
    See the following folder and decompile the Java sources with JAD or similar:
    C:\usr\sap\<SID>\JC00\j2ee\cluster\server0\src\irj\servlet_jsp\irj\root\WEB-INF\portal\portalapps\com.sap.portal.runtime.system.console
    Regards
    Dagfinn
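
    A sketch of the decompile step, assuming the jad binary is on the PATH (the -d and -s flags are from a generic JAD install and may differ by version):

    cd C:\usr\sap\<SID>\JC00\j2ee\cluster\server0\src\irj\servlet_jsp\irj\root\WEB-INF\portal\portalapps\com.sap.portal.runtime.system.console
    jad -d src -s java *.class    # write decompiled .java files into .\src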

  • Notification Service in a CLuster

    I have to implement a kind of Notification Service in a WLS cluster, which informs all registered subscribers about changes to an object (planned to be implemented as an EntityBean).
    I am thinking of JMS. But how can I survive a crash?
    Does anyone have any ideas?
    Thanks

    JMS has persistent messaging.
    Rainer Singvogel <[email protected]> wrote in message news:39ec6846$[email protected]..

  • Error using tuxedo proxy services in a cluster

    We have successfully created proxy services using the Tuxedo transport in single-server ALSB configurations.
    We are now installing ALSB 2.6 in a cluster, and we cannot make the service work. We have configured WTC on all the servers of the cluster.
    When we invoke the service, we get the following error in the log
    ####<Feb 19, 2007 4:32:00 PM CLST> <Error> <TuxedoTransport> <quintay> <ALSB1> <[ACTIVE] ExecuteThread: '7' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1171913520640> <BEA-381600> <Exception in TuxedoTask, java.lang.NullPointerException
    java.lang.NullPointerException
         at com.bea.wli.sb.transports.tuxedo.TuxedoTask.service(TuxedoTask.java:107)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:585)
         at weblogic.wtc.gwt.InboundEJBRequest.run(InboundEJBRequest.java:467)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:181)
    Any help or workaround? Is it possible to use the Tuxedo transport in a cluster configuration? Another problem we have seen is that the ALSB console only allows us to deploy the service in one local domain. How can I deploy the service on all the WTC servers?
    Thanks in advance
    Mauricio Palma

    Yes, BEA released a patch which partially solves the problem; now we can invoke the proxy service from Tuxedo. Some features are still missing (for example, we have to configure a separate proxy service on each node of the cluster, with different names, and then import every service from Tuxedo, so we cannot do fault tolerance).
    I don't know if the patch is generally available; we had to create a support case with BEA.

  • OSB Proxy Service deployed in cluster environment FTP issue

    Hi,
    I have an OSB 11g proxy service based on an FTP JCA adapter. Here are the JCA settings:
    <connection-factory location="JNDI_FILE_TOKEN" UIincludeWildcard="FILENAME_TOKEN"/>
      <endpoint-activation portType="Get_ptt" operation="Get">
        <activation-spec className="oracle.tip.adapter.ftp.inbound.FTPActivationSpec">
          <property name="DeleteFile" value="true"/>
          <property name="MinimumAge" value="0"/>
          <property name="PhysicalDirectory" value="FOLDER_TOKEN"/>
          <property name="Recursive" value="false"/>
          <property name="PollingFrequency" value="30"/>
          <property name="FileType" value="ascii"/>
          <property name="IncludeFiles" value="FILENAME_TOKEN"/>  
        </activation-spec>
      </endpoint-activation>
    When I deploy this proxy in a cluster (with 2 OSB nodes) and place the very first file on the FTP server, both nodes pick up the file simultaneously. But after this, when I place files on the FTP server, only one node picks them up. Could you kindly tell me where the issue is?
    Thanks and Regards

    Hi Mani,
    Check the configuration of ProxyService-2, i.e. whether it is enabled or not.
    Check its input payload, i.e. that ProxyService-1 is passing the correct payload to ProxyService-2.
    Redeploy ProxyService-2.
    There is some issue in your ProxyService-2.
    Regards,
    Richa

  • Disable PartitionListeners on service prior to cluster shutdown

    We're running a Coherence cluster of 70 storage nodes and a further 10 storage-disabled tcp-extend nodes, all running DefaultCacheServer.
    I have a PartitionListener configured on a distributed cache service which sends an email to a distribution list in the event of a 'PARTITION_LOST' event. However, currently when the storage-enabled processes are stopped (by killing the processes), we get spurious emails with partition losses due to the cluster being shut down.
    Is there a tidy, built in way to disable the partition listeners across a cluster before the storage nodes are stopped, and avoid sending these spurious mails? Note that the mails are not sent by the process that is currently stopping, but by one of the other processes when the PartitionEvent is detected.
    I've tried using Cluster.shutdown() but this does not stop the listeners from being called. I also tried using Cluster.isRunning() in the listener to determine whether to send the error email, but it appears the cluster gets restarted (and isRunning() returns true again) if CacheFactory.getCache() is called after Cluster.shutdown() has been called, so this is not a safe mechanism either.
    My current solution is to register each partition listener as a JMX bean on the management service (Registry) as it's instantiated, and expose a disable() method. I then run a standalone java class which joins the cluster and iterates over those registered beans calling disable on each one, prior to stopping the storage processes.
    The above solution works, but am I missing a built-in way of doing this? I'd rather use an out of the box solution if there's one that fits the bill.
    Thanks in advance,
    Giles Taylor

    Hi Giles,
    I don't think there is a feature for this particular purpose; for Coherence the death of a node is a usual event, and it does not have the concept of "I am going to kill all nodes now".
    However, it is very easy to implement it on your own.
    Just have a special-purpose cache on which you register an event listener. The event listener, on seeing a certain change in the cache (e.g. you put a specific value to a specific key to indicate that full shutdown will commence now), should set a static flag indicating that full cluster shutdown has started and that you therefore no longer need to send emails on partition loss. Alternatively you can do the same thing with an InvocationService (that invocation service needs to be started on all nodes), or you can simply put that special entry into a replicated cache (so the check can be carried out locally).
    Then in the partition listener you just check the static flag or the replicated cache, according to whichever approach you chose, and you send the mail only if full shutdown was not indicated.
    Then if you want to do a full shutdown, you first carry out the action for the approach you chose, and only then start to kill nodes.
    BR,
    Robert

  • Variant to Mixed Cluster

    I'm logging data to a database using the database toolkit. I'm trying to retrieve the data and I get a variant. My question is, how can I convert this variant into a cluster? It should go into a cluster of 17 items of mixed types, but I cannot get the Variant to Data VI to work properly.

    Have you created the variant "type" cluster and fed that into the type terminal? The "type", in your case, will be the 17 items (booleans, ints, strings, etc.) that you expect to have converted. It should be a 1:1 match. The VI will then match your variant data with your predefined "type" and produce the cluster data output. If you create a constant for this terminal and then drag in the 17 types, you should be OK.
    Good luck with it, Doug

  • Configuring a HA service on Sun Cluster 2.2

    I am a Product Manager working with customers using Oracle software on Sun Cluster 2.2. My question is, how can I configure a service to bind to a logical/virtual address, so as to make it available at the same address after failover? Are there cluster-specific steps that I need to take in order to achieve this?

    In the OPS environment the HA-Oracle agent is not used; an instance of the same database runs on each node. The failover happens on the client side, because all nodes access the same shared-disk database. The tnsnames.ora file is modified so that if a transaction fails, the client will try the other nodes. The OPS environment also uses Oracle's UNIX distributed lock manager (UDLM), so there are some overhead issues.
    Let me know if this is the info you needed,
    Heath
    [email protected]
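
    A sketch of the kind of client-side tnsnames.ora entry Heath describes (the alias, host names, service name, and port are placeholders, and the FAILOVER syntax is generic Oracle Net, not taken from this cluster):

    cat >> $TNS_ADMIN/tnsnames.ora <<'EOF'
    OPSDB =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (FAILOVER = ON)
          (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = node2)(PORT = 1521)))
        (CONNECT_DATA = (SERVICE_NAME = opsdb)))
    EOF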

  • Monitoring services on MSMQ cluster with SCOM 2012

    Hello guys,
    I'm looking for a way for SCOM 2012 to monitor several services installed on an MSMQ cluster. I think SCOM 2012 can generate alerts through the Message Queuing service associated with these services, but this is indirect monitoring. I would want SCOM to generate the alert for the specific service that has the problem.
    Any suggestion?
    Thank you very much.

    Hi,
    There is a management pack for Message Queuing.
    You can download it from http://www.microsoft.com/en-us/download/details.aspx?id=36775
    Hope it helps.
    Greetings,
    Roel Knippen

  • Web Services Session /Singleton cluster environment

    Hi!
    I am having trouble figuring out how to do sessions in web services.
    I am trying to build an app that uses multiple GUIs in different languages (not Java), and we want to implement our business layer with web services.
    I need a session to save user settings and user info so that I can avoid going to the database and LDAP on every call.
    So far my idea is to have a web service that deals with authentication. If the user is valid, the web service will save the user settings in a singleton that holds a hashmap, and return the hashmap key to the client as a token.
    The client has the responsibility to save the token and to use it in later calls.
    The problem I have is that a singleton would not work in a cluster environment! And I really don't want to be passing all the user information between calls.
    Any ideas how I can solve this?

    Hi Gaurav,
    Yes, the currently running session will be taken care of by another server node which is up and running.
    Your question: does this mean that a session of the WebDynpro application will continue to work, or are the changes lost and the user needs to log in again?
    "The state of the session is maintained in persistent storage after the last successfully ended request." This means that if, for example, a user was filling in a form and hadn't saved the changes when the server crashed, the data on that form will be lost when the second server node starts processing the request. But if the user had saved the data before the server crashed, that state of the session will be stored, the data of the form will be saved, and the user need not fill it in again. If the server crashes in the middle of an HTTP request, the request is assigned to another server node to generate the response. In neither case does the user need to log in again.
    Your question: fundamentally, what does WebDynpro use behind the scenes for maintaining the session?
    It uses Java serialization to serialize the HTTP sessions.
    "Serialization involves saving the current state of an object to a stream, and restoring an equivalent object from that stream. The stream functions as a container for the object. The information stored in the container can later be used to construct an equivalent object containing the same data as the original."
    Edited by: Aishwarya Sharma on Oct 7, 2008 8:07 AM
