Check Points for Jury Duty Pay Configuration

Hi Experts,
Can anyone explain the check points to be considered for jury duty pay configuration?
With Regards,
Satish

Hi Sophia,
If your question has been answered or the problem is solved, please mark the thread as answered.
Thank you,
Liran

Similar Messages

  • Check Points for production server

    Hi
    We are planning to update the patch level of our production server, and our SRM server points to this server. In this regard, I would like to know: what are the check points on the production server after patching?
    I want to know the main areas which may be affected by the patch update.
    Regards

    Hi,
    Please go through the new features that were introduced with this patch level, and make a note of the bugs SAP has fixed with this service pack.
    Keeping these features and bug fixes in view, keep track of the areas you need to monitor.
    It is also important to check the following:
    1. Master Data - Materials, Vendors and Users.
    2. Transactional Data - SCs, PRs, POs, Confirmations, Invoices.
    3. Workflow.
    Regards,
    Satya

  • Single check (single check number) for multiple pages

    While printing a check, I have 100 line items for the same vendor and want to issue a single check. With the standard program and layout, it prints 3 pages and one check per page: the first 2 are voided checks and the 3rd is the original. The issue is that it creates 3 checks with different numbers. If the three checks are 100001, 100002 and 100003, then 100001 and 100002 are voided and 100003 is the original check. For my requirement, all pages should carry the same check number, with 2 pages voided and the 3rd page as the original, i.e. 100001 (voided), 100001 (voided), 100001 (original). Can anyone help with this?

    Hi Lak,
    Option 1: You can do this if you have a centralized payment system, that is, one company paying for multiple company codes.
    You can then do the following customizing:
    1) In FBZP, "All company codes": make sure the paying company code is the same for the different sending company codes.
    2) Do the cross-company customizing in OBYA.
    Option 2: If you don't have a centralized payment system, there could be a possibility to address your need by adjusting the SAPscript form for the check. From a good accounting practice point of view it might not be advisable. Moreover, if you are running a check extract for the bank (positive pay file), keep in mind that standard SAP extracts check information per company code, so you will need some development, and you lose some check reporting functions too. I am also not sure how you will handle check reversals; it might involve multiple steps, since you may not be able to use the standard FCH8 directly.
    Unfortunately, the payment grouping key ZGRUP is defined at the company code level.
    Thanks
    Ron

  • Host agent check errors for host

    Dear All,
    We have recently installed a PI 7.4 system. While configuring it in SolMan 7.1, I am facing the issue "Host agent check errors for host" in
    Managed System Configuration -> Assign Diagnostics Agents.
    Kindly help me out.
    Regards
    Jay

    Hi Jay,
    Please confirm the ownership and permissions of the directory /usr/sap/hostctrl. For reference, please follow the SCN link "Problem with connecting SAP Host Agent".
    After that retry the phase.
    Regards,
    Gaurav

  • Configuring shipping point as Trigger point for posting outbound idoc

    Hello All,
    I have a requirement that on saving an outbound delivery (VL02N), an outbound IDoc should be posted, but the condition for posting the IDoc should be its shipping point. I am not sure how to configure the shipping point as the trigger point for the IDoc.
    Please suggest.
    Thanks for your co-operation.
    Regards
    Anand

    Hi Anand,
    1. Go to transaction NACE, select application V2 and click on output types.
    2. Switch from display to change mode.
    3. Select LAVA and press F6 (to copy the standard output type LAVA to ZLAV).
    4. Give your Z name for the new custom output type.
    5. Change the access sequence from 0005 to 0012 (shipping point).
    6. Press ENTER and click 'Copy all'.
    7. Now go to transaction VV21 and enter the created output type name. Press ENTER.
    8. Maintain the entries: shipping point, customer number, medium '6', etc.
    Now try VL01N with the given shipping point and check whether the output ZLAV has been triggered.
    Let's see if any configuration is still missing!
    Keep this as reference:
    http://www.erpgenie.com/sapedi/messagecontrol.htm
    Reddy

  • Doubt in configuring entry points for iviews

    Hi, I have followed the steps in the help about configuring entry points for iViews:
    1. In the PCD I have created a folder to store iViews for WPC.
    2. I have given read permissions.
    3. I have deactivated the 'hide root folder' check for the PCD repository.
    4. I have created a folder in KM.
    I am lost at the step "Create an entry point for the folder that you created in step x".
    How do I link the PCD folder that stores the iViews with the KM folder?
    When I try to create an entry point I can only see KM folders. How can I choose my PCD folder?
    Thanks.
    Regards.

    Try this:
    Procedure
    1. Choose System Administration → System Configuration → Knowledge Management → Content Management → User Interface → Mapping → Component.
    2. Edit the wpcDragExplorerEntryPoints configuration object.
    Use the following parameters:
    Parameter: Description
    displaymode: Specify select
    maxproviderprio: Highest priority number that standard entry points can have and still be displayed in the Web content browser
    entriesperrow: Number of entry points displayed in each row (default: 5)
    entrypointsprefix: Prefix of the entry point repository. If you specify a value for this parameter, standard KM entry points are also displayed in the Web content browser. Default: /entrypoints
    sharedcontent: Semicolon-separated list of paths to be displayed as global entry points for shared content
    Example of parameter values (including the sharedcontent parameter):
    displaymode=select,maxproviderprio=30,entriesperrow=5,entrypointsprefix=/entrypoints,sharedcontent=/wpccontent/Cross-Site Content;/pcd

  • Check Point VPN-1 Securemote service causes Error 1075: The dependency service does not exist or has been marked for deletion

    Has anyone ever managed to solve this one?
    We have a problem on Windows 7 SP1 32-bit systems.
    When we try to start the Check Point VPN-1 SecuRemote service, we get this message:
    Error 1075: The dependency service does not exist or has been marked for deletion
    When we install a clean Windows 7 from an ISO, the Check Point client starts without any issues; when we use the same Windows setup source and install it from an SCCM task sequence, the Check Point client doesn't start.
    The client we're installing is CP_SeucRemoteSecureClient_NGX_R30_HFA3, which is supported for Win7 x86.
    Other than that there are no special prerequisites for the Check Point client, therefore we know the problem is local.
    We tried to isolate and minimize the cause of the issue by looking at the differences between a clean installation from the ISO and a clean installation from SCCM.
    We know for sure the problem is not caused by the SCCM client, Programs and Features, updates, or Group Policy.
    It seems like the configuration is the same, but we can't find any online solution for this issue.
    Any suggestions?
    Thanks
    Tamir Levy

    Hi,
    Please check if all the dependency services are running; if not, please start them:
    DNS Server
    Remote Procedure Call (RPC)
    Netlogon
    Server
    Workstation
    Any error message, please post back.
    You can also check solution in this article:
    http://support.microsoft.com/kb/269375
    Hope these could be helpful.
    Kate Li
    TechNet Community Support

  • Check point in fund management for budget and creation of po

    I have a budget for a particular expense, say Rs 100,000, so a PO should not be created for more than Rs 100,000 as per the budget. However, the PO is being created for more than the budget amount.
    Please let me know where the check point has to be maintained.
    Thanks
    Santosh Visave
    Moderator: Please, search SDN

    Hi ,
    First of all, as a primary step you can check the following:
    1. Is the commitment item being derived in the PO?
    2. Is the commitment item non-statistical (i.e. you have not ticked 'statistical commitment item' in FMCIA)?
    3. Check the budget amount and the released amount via the report FMRP_RW_BUDCON.
    Hope this helps.
    Cheers ,
    Dewang

  • Performance tuneup for a special DB (disable locking, check-pointing,...)

    Hi,
    I have a simple database containing key/value records. The program is a multi-threaded application that iterates over the records. Each worker thread reads a record and, after some calculations, replaces it. The records are completely independent of each other. The following is my DBController class, which is shared between all threads. Are there any considerations for achieving the best performance? For example, I don't want any locking, check-pointing, or caching overheads. I just want to achieve THE BEST PERFORMANCE to store and retrieve each record independently.
    import gnu.trove.*;
    import java.io.*;
    import com.sleepycat.bind.tuple.*;
    import com.sleepycat.je.*;

    public class DBController {

        private class WikiSimTupleBinding extends TupleBinding<TIntObjectHashMap<TIntDoubleHashMap>> {

            // Write an appropriate object to a TupleOutput (a DatabaseEntry)
            public void objectToEntry(TIntObjectHashMap<TIntDoubleHashMap> object, TupleOutput to) {
                try {
                    ByteArrayOutputStream bout = new ByteArrayOutputStream();
                    ObjectOutputStream oout = new ObjectOutputStream(bout);
                    oout.writeObject(object);
                    oout.flush();
                    oout.close();
                    bout.close();
                    byte[] data = bout.toByteArray();
                    to.write(data);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }

            // Convert a TupleInput (a DatabaseEntry) to an appropriate object
            public TIntObjectHashMap<TIntDoubleHashMap> entryToObject(TupleInput ti) {
                TIntObjectHashMap<TIntDoubleHashMap> object = null;
                try {
                    byte[] data = ti.getBufferBytes();
                    object = (TIntObjectHashMap<TIntDoubleHashMap>) new java.io.ObjectInputStream(
                            new java.io.ByteArrayInputStream(data)).readObject();
                } catch (Exception e) {
                    e.printStackTrace();
                }
                return object;
            }
        }

        private Environment myDbEnvironment = null;
        private Database db_R = null;
        private WikiSimTupleBinding myBinding;

        public DBController(File dbEnv) {
            try {
                // Open the environment. Create it if it does not already exist.
                EnvironmentConfig envConfig = new EnvironmentConfig();
                envConfig.setAllowCreate(true);
                myDbEnvironment = new Environment(dbEnv, envConfig);
                // Open the databases. Create them if they don't already exist.
                DatabaseConfig dbConfig = new DatabaseConfig();
                dbConfig.setAllowCreate(true);
                db_R = myDbEnvironment.openDatabase(null, "R", dbConfig);
                // Initialize the binding API
                myBinding = new WikiSimTupleBinding();
            } catch (DatabaseException dbe) {
                // Exception handling goes here
                dbe.printStackTrace();
            }
        }

        private final byte[] intToByteArray(int value) {
            return new byte[] { (byte) (value >>> 24), (byte) (value >>> 16), (byte) (value >>> 8), (byte) value };
        }

        public void put(int id, TIntObjectHashMap<TIntDoubleHashMap> repository) {
            try {
                DatabaseEntry theKey = new DatabaseEntry(intToByteArray(id));
                DatabaseEntry theData = new DatabaseEntry();
                myBinding.objectToEntry(repository, theData);
                db_R.put(null, theKey, theData);
            } catch (Exception dbe) {
                // Exception handling goes here
                dbe.printStackTrace();
            }
        }

        public TIntObjectHashMap<TIntDoubleHashMap> get(int id) {
            TIntObjectHashMap<TIntDoubleHashMap> repository = null;
            try {
                // Create a pair of DatabaseEntry objects. theKey is used to perform the search.
                // theData is used to store the data returned by the get() operation.
                DatabaseEntry theKey = new DatabaseEntry(intToByteArray(id));
                DatabaseEntry theData = new DatabaseEntry();
                // Perform the get.
                if (db_R.get(null, theKey, theData, LockMode.DEFAULT) == OperationStatus.SUCCESS) {
                    // Recreate the data repository
                    repository = myBinding.entryToObject(theData);
                } else {
                    System.out.println("No record found for key '" + id + "'.");
                }
            } catch (Exception e) {
                // Exception handling goes here
                e.printStackTrace();
            }
            return repository;
        }

        public void close() {
            // Closing the database and environment
            try {
                if (db_R != null)
                    db_R.close();
                if (myDbEnvironment != null)
                    myDbEnvironment.close();
            } catch (DatabaseException dbe) {
                // Exception handling goes here
                dbe.printStackTrace();
            }
        }
    }

    If you are writing and you need to recover in a reasonable amount of time after a crash, you need checkpointing.
    If multiple threads may access a record concurrently, you need locking.
    If you need good read performance, you need as large a JE cache as possible.
    If you want to tune performance, the first step is to print the EnvironmentStats (Environment.getStats) periodically, and read the FAQ performance section. Try to find out if your app's performance is limited by CPU or I/O.
    If you are reading records in key order, then you'll get better performance if you also write them in key order.
    I'm not sure why you're using a TupleBinding to do Java object serialization. If you want Java serialization, try using a SerialBinding.
    --mark
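    As a rough illustration of the EnvironmentStats suggestion above, here is a minimal sketch assuming the standard JE Environment.getStats(StatsConfig) call; the StatsDumper class name and the one-minute interval are illustrative only, not part of the original thread:
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentStats;
    import com.sleepycat.je.StatsConfig;

    // Minimal sketch: periodically dump JE environment statistics so you can see
    // whether the workload is limited by cache misses (I/O) or by CPU.
    public class StatsDumper implements Runnable {
        private final Environment env;

        public StatsDumper(Environment env) {
            this.env = env;
        }

        public void run() {
            StatsConfig cfg = new StatsConfig();
            cfg.setClear(true); // report deltas since the previous call
            while (!Thread.currentThread().isInterrupted()) {
                EnvironmentStats stats = env.getStats(cfg);
                System.out.println(stats); // toString() lists cache misses, evictions, cleaner activity, etc.
                try {
                    Thread.sleep(60_000); // hypothetical interval: once per minute
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }
    Running a thread with this against the same Environment used by DBController would show, for example, whether increasing the JE cache size actually reduces cache misses.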

  • Errors with SharePoint Security Token Service: "The revocation function was unable to check revocation for the certificate"

    I'm getting these errors in the eventlog and ULS, "An operation failed because the following certificate has validation errors:\n\nSubject Name: CN=SharePoint Security Token Service, OU=SharePoint, O=Microsoft, C=US\nIssuer Name: CN=SharePoint Root
    Authority, OU=SharePoint, O=Microsoft, C=US\nThumbprint: <STS CERTIFICATE THUMBPRINT>\n\nErrors:\n\n RevocationStatusUnknown: The revocation function was unable to check revocation for the certificate."
    The errors point to the SharePoint Security Token Service as the issue ("The revocation function was unable to check revocation for the certificate") reported back by the Topology service.  This is apparent when executing a search, accessing
    the managed metadata service, issuing SPSite commands in Powershell, or anything that needs to run through the "SharePoint Web Services" site.  I've looked at the certificate assigned to that site and everything appears to be in order. 
    It would seem to me to be either an incorrect endpoint configuration (internally cached perhaps?) or related to security access for the configuration database (in order to validate the certificate root).
    What I’ve tried so far:
    I’ve been all over the certificate settings, both in the server store, and within SharePoint Token Service config.  Both appear to be configured correctly such that the root CAs can be validated.
    Re-entered the passwords for the application pool domain accounts to eliminate these as a potential cause.  I’ve also verified the service accounts reporting the error, do have access to the configuration database.
    Re-provisioned the STS service to see if that might clear out any cached issues and validated everything else according to this
    MS Tech note.
    So far nothing has worked.  Is there anything else I could be looking at that I've missed? (Full eventlog detail below)
    Log Name:      Application
    Source:        Microsoft-SharePoint Products-SharePoint Foundation
    Date:          2/20/2015 11:19:41 AM
    Event ID:      8311
    Task Category: Topology
    Level:         Error
    Keywords:      
    User:          <SP SERVICE ACCOUNT>
    Computer:      <SHAREPOINTSERVER>
    Description:
    An operation failed because the following certificate has validation errors:\n\nSubject Name: CN=SharePoint Security Token Service, OU=SharePoint, O=Microsoft, C=US\nIssuer Name: CN=SharePoint Root Authority, OU=SharePoint, O=Microsoft, C=US\nThumbprint: <STS
    CERT THUMBPRINT>\n\nErrors:\n\n RevocationStatusUnknown: The revocation function was unable to check revocation for the certificate.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-SharePoint Products-SharePoint Foundation" Guid="{6FB7E0CD-52E7-47DD-997A-241563931FC2}" />
        <EventID>8311</EventID>
        <Version>14</Version>
        <Level>2</Level>
        <Task>13</Task>
        <Opcode>0</Opcode>
        <Keywords>0x4000000000000000</Keywords>
        <TimeCreated SystemTime="2015-02-20T17:19:41.213852500Z" />
        <EventRecordID>1611121</EventRecordID>
        <Correlation />
        <Execution ProcessID="10212" ThreadID="10328" />
        <Channel>Application</Channel>
        <Computer><SHAREPOINTSERVER></Computer>
        <Security UserID="<SP SERVICE ACCOUNT>" />
      </System>
      <EventData>
        <Data Name="string0">CN=SharePoint Security Token Service, OU=SharePoint, O=Microsoft, C=US</Data>
        <Data Name="string1">CN=SharePoint Root Authority, OU=SharePoint, O=Microsoft, C=US</Data>
        <Data Name="string2"><STS CERT THUMBPRINT></Data>
        <Data Name="string3">RevocationStatusUnknown: The revocation function was unable to check revocation for the certificate.
    </Data>
      </EventData>
    </Event>

    Hi Darren,
    This problem seems to occur when an administrator deletes the local trust relationship of the farm from the Security section of the Central Administration website.
    In order to resolve this problem, the local trust relationship has to be recreated. This can be done by running the following PowerShell commands:
    $rootCert = (Get-SPCertificateAuthority).RootCertificate
    New-SPTrustedRootAuthority -Name "localNew" -Certificate $rootCert
    After running the above commands, perform an IISReset on all servers in the farm.
    More information:
    http://support.microsoft.com/kb/2545744
    Best Regards,
    Wendy
    Forum Support
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact
    [email protected]
    Wendy Li
    TechNet Community Support

  • Default vendor in PO for custom duty

    Hi,
    This problem is related to IS Retail.
    For import procurement we have one condition type ZDUT (a copy of customs duty). We have defined the customs official as a vendor (xyz) to whom we pay the customs duty.
    While making an import PO we select condition type ZDUT and change the vendor to the vendor (xyz) to whom we want to pay this duty, but by default the system takes the material supplier as the vendor for this duty.
    We want the system to pick vendor (xyz) as the default vendor for this customs duty for all POs having condition type ZDUT, not the material supplier.
    How can this be made possible?
    Thanks & regards,
    Manoj Gupta

    Thanks Dev,
    I have tried the default vendor setting in the info record condition tab but am not able to do it. In the info record, if I try to add condition type ZDUT, the system gives the error "condition type ZDUT is not in procedure A M RM0002". I checked the possible entries and condition type ZDUT is not available.
    I have checked the vendor master and the schema group IV is assigned to the vendor. This vendor schema group IV is defined in the system, and the calculation schema ZIMPOR is attached to IV. Condition type ZDUT is also available in ZIMPOR.
    Why is the system not allowing me to add condition type ZDUT in the info record, and why is the system calling procedure RM0002 in the info record? What do I need to do for this default vendor setting?
    Please guide me.
    Thanks & regards,
    Manoj Gupta

  • Check list for BPM

    Please share docs or links which would give information about troubleshooting BPM or the check points.

    Hi,
    Check these links,
    Walkthrough with BPM
    Problem with BPM Configuration
    Business Process Monitoring - Error on Monitoring type
    Re: No object type found for this message
    BPE_ADAPTER error on async BPM

  • Check Points & Troubleshooting for Using Net8 Local Naming Methods

    Product: SQL*NET
    Date written: 2000-12-15
    ===================================================================
    Check Points & Troubleshooting for Using Net8 Local Naming Methods
    ====================================================================
    This note summarizes general check points and troubleshooting items for the Local Naming method in Net8 (connecting to the database using sqlnet.ora and tnsnames.ora).
    [Client Connection : Local Naming Methods]
    1. Local Naming method resolves service names by using the local configuration files tnsnames.ora and sqlnet.ora.
    2. One of the benefits of the Local Naming method is that it provides a simple method for resolving service name addresses. It is easy to configure local naming by using a GUI called the Net8 Assistant. When you configure a client machine for local naming by using the Net8 Assistant, a tnsnames.ora file is generated. To configure all the other client machines on the network that need to use the same database services, you can simply copy this file to the client machines.
    3. Another benefit of using the local-naming method is that it resolves service names across networks running different protocols.
    [Local Naming : Configuration]
    1. The local-naming method resolves service names by using the information configured and stored on each individual client. Local naming is most appropriate for simple distributed networks with a small number of services that change infrequently.
    2. Local naming requires the configuration of two local files, tnsnames.ora and sqlnet.ora. The tnsnames.ora file contains the address needed to direct a connection request to the specified listener on the specified node by using the specified database. The sqlnet.ora file stores information about the selected naming method.
    3. Local naming is easy to configure by using the Net8 Assistant. The Net8 Assistant is implemented in Java and is packaged with the Java Runtime Environment. You can run the Net8 Assistant on any platform on which Net8 is installed.
    4. When you configure an Oracle8 server to use the local-naming method for service names resolution, the client-side configuration files tnsnames.ora and sqlnet.ora are automatically generated. After the configuration is complete, you copy these two files from the server to the client machines.
    [Client Files : Parameters]
    1. The tnsnames.ora and sqlnet.ora files are generated at the default location ORACLE_HOME\net80\admin.
    2. The first parameter in the tnsnames.ora file specifies the service name and the domain name for the client.
    3. The tnsnames.ora file contains a parameter called DESCRIPTION. This parameter contains information about the connect descriptor for the client.
    4. The ADDRESS parameter in the tnsnames.ora file specifies the network address of the host to which the client is connected. If multiple addresses are specified, use the keyword ADDRESS_LIST before the ADDRESS parameter.
    5. Another parameter in the tnsnames.ora file is the CONNECT_DATA parameter. This parameter specifies the SID of the database to which the client is connected.
    6. The DEFAULT_DOMAIN parameter of the sqlnet.ora file specifies the domain from which the client most often requests names. When this parameter is set, the domain name is automatically appended to the service name in a connect string.
    7. The DIRECTORY_PATH parameter is another key parameter of the sqlnet.ora file. This parameter specifies the names resolution method to be used for the client-server connection.
    8. The sqlnet.ora file also contains a parameter called DEFAULT_ZONE. This parameter specifies the region to which the client belongs. (An example tnsnames.ora and sqlnet.ora pair is sketched after this list.)
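    For illustration only, a minimal sketch of what the two client files described above might look like. The service name SALES.WORLD, host dbhost01, port 1521 and SID ORCL are hypothetical placeholders, and in the sqlnet.ora file the domain and naming-method parameters carry the NAMES. prefix:
    # tnsnames.ora - hypothetical entry mapping a service name to a connect descriptor
    SALES.WORLD =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1521))
        (CONNECT_DATA = (SID = ORCL))
      )
    # sqlnet.ora - selects local naming and the default domain
    NAMES.DIRECTORY_PATH = (TNSNAMES)
    NAMES.DEFAULT_DOMAIN = WORLD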
    [Troubleshooting the Client Side]
    These are common client-side problems and the error messages associated with
    each error code.
    ORA-12154: "TNS: could not resolve service name"
    ORA-12198: "TNS: could not find path to destination"
    ORA-12203: "TNS: unable to connect to destination"
    ORA-12533: "TNS: illegal ADDRESS parameters"
    1. The ORA-12154 error occurs when Net8 cannot locate the connect descriptor specified in the tnsnames.ora configuration file. To solve this problem, first verify that a tnsnames.ora file exists and is accessible. Next, verify that multiple copies of the tnsnames.ora file are not present. After checking the existence of the tnsnames.ora file, verify that the service name specified in the connect string is mapped to a connect descriptor in the tnsnames.ora file. Verify also that the file does not contain any syntax errors. The ORA-12154 error also occurs if there is more than one copy of the sqlnet.ora file. Verify that duplicate copies of the sqlnet.ora file are not present.
    2. The ORA-12198 and ORA-12203 errors occur when the client is unable to find the required database. To solve this problem, first verify that you have correctly entered the service name that you want to use. Next verify that the service name parameters in the ADDRESS section of the connect descriptor in the tnsnames.ora file are correctly defined. Then, verify that the tnsnames.ora file is stored in the correct directory. The ORA-12198 and ORA-12203 errors can also occur if the listener on the remote node is not running. Verify that the listener on the remote node has started and is running. If the listener is not running, start the listener by using the Listener Control(LSNRCTL) utility.
    3. The ORA-12533 error occurs if the protocol-specific parameters in the ADDRESS section of the designated connect descriptor in the tnsnames.ora file are incorrect. To solve this problem, use the correct protocol-specific parameters in the ADDRESS section of the connect descriptor.
    4. The ORA-12545 error occurs when the listener on the remote node cannot be contacted. This may happen if the values of the ADDRESS parameter in the tnsnames.ora file and the listener.ora file are incorrect. In this case replace the incorrect values with the correct values in both the files. The ORA-12545 error can also occur if the listener on the remote node is not
    started. To verify whether or not the listener is started, determine its status with the STATUS command of the LSNRCTL utility. If necessary, start the listener on the remote node with the START command.


  • Problem with option: enable this distribution point for prestaged content

    Hi, for the past few days I have been focused on application deployment with SCCM 2012 R2 (I am learning SCCM 2012 R2 by reading books, watching video trainings, etc.). Following along with the book Mastering System Center 2012 R2 Configuration Manager, I also did deployments of Microsoft Office 2013, Adobe Reader 11, Foxit Reader 4.2 and Notepad++ to a test collection containing one Windows 7 computer. In one of the examples, a prestaged content file is created for the Foxit Reader application. In order to do so, if I am right, the option "Enable this distribution point for prestaged content" has to be checked on the distribution point, and I did that.
    The problem is that the deployment of any application to the test collection failed every time - the system waits for a prestaged content file if this option is selected. Then I have to delete the deployment of the given application and the application itself, deselect the option "Enable this distribution point for prestaged content", recreate the app, distribute it to the DP and deploy it to the test collection - then everything works well.
    To make things worse, having deselected the option "Enable this distribution point for prestaged content" (I did it yesterday and checked that several times), this morning I checked again and this option is selected again. What might be the cause of this behaviour? I expected this option to remain deselected - this is weird to say the least.

    When you enable the option "Enable this distribution point for prestaged content" on a DP, it causes newly created applications/packages to default to "Manually copy the content in this package to the distribution point". This then causes the deployments on the clients to remain at 'waiting for content', unless you manually prestage the content, as this option requires it.
    If you enable the option "Enable this distribution point for prestaged content" on the DP but want CM to distribute the content automatically, then configure the app/package to do so before distributing the content: Distribution Settings tab > Prestaged distribution point settings > Automatically download content when packages are assigned to distribution points.

  • Check point query information

    Hello Team,
    On one of my production servers I am continually getting this error:
    [CHECK POINT FREE SPACE QUERY TIME OUT]
    We receive this information through a third-party tool, and the IT team raises a ticket to the DBA team.
    Note: we checked the SQL Server error log for this issue, but there is no information about it; everything else works fine.
    How can we resolve this issue? We are getting this alert 4 or 5 times daily.

    Since you have gone through the SQL error log and no issues are reported there, this is not related to the SQL database server. Also, as you said it is a third-party tool, I suggest checking what that third-party tool is for, where it is configured and for what purpose (monitoring, application-level query, etc.), and what is described inside it.
    Thanks, Rama Udaya.K (http://rama38udaya.wordpress.com) - Please remember to mark the replies as answers if they help and unmark them if they provide no help; vote if they give you information.
