Best Practice for Production environment

Hello everyone,
Can someone share the best practices for a production environment? Or is there an SAP standard best practice to follow in a production landscape?
I understand there are best practices available for implementation, migration and upgrade, but I was unable to find one for a productive landscape.
Thanks.

Hi Siva,
What best practice are you looking for? If you can be more specific with your question, we can provide an appropriate response.
From my Basis experience, here are some best practices:
1) The productive landscape should provide high availability to the business. For this you may set up DR, HA, or both.
2) Backups should be configured, and restores from those backups should already have been tested.
3) All monitoring should be set up, viz. application, OS and DB.
4) The productive client should not be modifiable.
5) Users in the production landscape should have appropriate authorizations based on SoD. There should not be any SoD conflicts.
6) Transports to production should be tightly controlled. Any transport to production should be moved only with the appropriate Change Board approvals.
7) Relevant database and OS security parameters should be tested before go-live and then enabled.
8) Pre-go-live and post-go-live checks should have been performed on the production system.
9) EWA should be configured, at least for the production system.
10) Production system availability via DR should have been tested.
Hope this helps.
Regards,
Deepak Kori

Similar Messages

  • Best Practice for Production IDM setup

    Hi, what is the best practice for setting up production IDM:
    1. Connect IDM Prod to ECC Dev, QA and Prod, or
    2. Connect IDM Prod to ECC Prod only, and connect IDM Dev to ECC Dev and QA?
    Please also specify pros and cons for both options if possible.
    Thanks in advance,
    Farhan

    We run our IDM installation as per your option 2 (prod and non-prod on separate instances).
    We use HCM as the source of truth in production and have a strict policy of not allowing non-HCM-based user accounts. HCM creates the SU01 record and the details are downloaded to IDM through the LDAP extract. Access is provisioned based on roles attached to the HCM position in IDM. In Dev/Test/UAT we create user logins in IDM and push the details out.
    Our thinking was that we definitely needed a testing environment for development and patch testing, and it needs to be separate from production. It was also ideal to use this second environment for dev/test/UAT, since we are in the middle of a major SAP project rollout and are creating hundreds of test and training users with various roles, and we prefer to keep this out of a production instance.
    Lately we also created a sandpit environment, since I found that I could not do destructive testing or development in the dev/test/UAT instance because we were becoming reliant on that environment being available. It is almost a second production instance, since we also set the policy that all changes are made through IDM and no direct SU01 changes are permitted.
    Have a close look at your usage requirements before deciding which structure works best for you.

  • SOA OSB Deployment best practices in Production environment.

    Hi All
    I just wanted to know the best practices followed in a production environment for deploying OSB and SOA code. As you are aware, both require libraries from either JDeveloper/SOA Suite or OEPE/OSB. Should one extract the libraries and package them with the ANT scripts (I am not sure, but SOA would require its internal ANT scripts and a lot of libraries to be bundled, while OSB requires only a few OEPE and OSB libraries), or do we simply use one of the options below:
    1) Use the production runtime (SOA server and OSB server) to build and deploy the code. OEPE would not be present here, so we would just have to deploy the already created sbconfig.jar (we would build this in a local environment where OEPE and OSB are installed). The code is checked out from a repository and transferred to this Linux machine.
    2) Use a Windows machine (which has access to the prod environment) with JDeveloper, OEPE and OSB installed to build and deploy the code to the production server. The code is checked out from a repository.
    Please let us know your personal experiences with the deployment in PROD. Thanks a lot!

    There are two approaches for deployment of OSB and SOA code.
    1. Use a machine dedicated to build and deployment which has access to all production environments (wherever deployment needs to be done). Install all the required software (OEPE, OSB, etc.) and use remote deployment to deploy the code.
    2. Bundle all the build- and deployment-related libraries, ship them as a deployment package to the target server and run the deployment there.
    The most commonly followed approach is approach #1.
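    For illustration, a minimal sketch of approach #1, assuming SOA Suite 11g's bundled ant-sca-deploy.xml script for the composite and a WLST-based import for the OSB sbconfig.jar; hosts, paths, credentials and the import.py/import.properties files are placeholders, not taken from this thread:
    # Hypothetical deployment run from the central build/deploy machine.
    # Deploy a SOA composite (SAR) remotely using the ant-sca-deploy.xml shipped with SOA Suite:
    ant -f $MW_HOME/Oracle_SOA1/bin/ant-sca-deploy.xml deploy \
        -DserverURL=http://soa-prod-host:8001 \
        -DsarLocation=./sca_MyComposite_rev1.0.jar \
        -Duser=weblogic -Dpassword=********
    # Import the sbconfig.jar (built in the local OEPE/OSB environment) into the prod OSB,
    # typically via a custom WLST import script (import.py and its properties file are your own artifacts):
    java weblogic.WLST ./import.py ./import.properties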
    Regards
    Vivek

  • Best Practice for MUD Environment

    Hi Guys,
    I initially thought of using Merge Repository as an alternative to a MUD environment.
    But I found that while merging repositories, you have to accept changes from either the Modified or the Current repository.
    What if I have two developers working in parallel in a single Presentation folder?
    Then I thought a project-based MUD implementation would be the only option, but with that, developers have the power to keep or remove changes from other developers.
    Now I am confused about how I can get multiple users to develop a single RPD.
    Please let me know what the best practice is.
    Thanks
    Saurabh

    Below are some explanations; follow the links if you want more information. Personally, I prefer to set up a MUD environment.
    Software Configuration Management
    By default, the Oracle BI repository development environment is not set up for multiple users. However, online editing makes it possible for multiple developers to work simultaneously, though this may not be an efficient methodology, and it can result in conflicts because developers can potentially overwrite each other's work.
    To develop a repository in a concurrent versioning environment, you have several choices:
    * First of all, you can send the repository to the developer, keep a copy, retrieve it after modification and perform a [Merge Repository|http://gerardnico.com/wiki/dat/obiee/obiee_repository_merge].
    * Second, you can set up a [multiuser environment (MUD)|http://gerardnico.com/wiki/dat/analytic/obiee/multiuser_environment], which uses the notion of projects to split the work area. It permits developers to modify a repository simultaneously and then check in their changes.
    The import option, which permits importing a subset of one repository into another repository, works but is deprecated.
    Success
    Nico

  • Best practices for setting environment based static variables?

    I have a set of static string variables that hold the url location of modules in a project. These locations change depending on whether I'm building for development, staging or production.
    What's the best way to set static variables in this way?

    I don't know if this is best practice, but here's the solution I've come up with.
    The root domain is accessible within the swf via a node on a loaded xml file. So I created a simple method that sets a url variable based on that domain node.
    The domain-based url variable is then used within the static string variables that define the location of the modules.
    Simplified like so:
    var domain:String = xml.node.value;
    static var bucketLocation:String = getLocation();
    static var moduleLocation:String = bucketLocation + "modulename.swf";
    function getLocation():String
    {
         var loc:String;
         switch (domain) {
              case stagingUrl:
                   loc = "pathToAmazonStagingBucket";
                   break;
              case productionUrl:
                   loc = "pathToAmazonProductionBucket";
                   break;
         }
         return loc;
    }

  • Best practices for defining Environment Variables/User Accounts in Linux

    Hello,
    After reading through the Quick Install guide for 10gR2 on x86_64 Linux, I see that it is not recommended to define ANY variables in .bash_profile.
    I'm hoping to get a best-practices approach for defining environment variables - right now we use the oracle Linux account for administration, including SQL*Plus. So, where should the myriad variables be defined? Is it important enough to create a separate user account in Linux to support best practices?
    What variables, exactly, should be defined? It seems that LD_LIBRARY_PATH is no longer used?
    Thanks in advance
    Doug

    Something that I've done for years on Unix/Linux boxes is to create a separate environment variable setup file for each instance on the box. This would include things like ORACLE_HOME, ORACLE_SID, etc. Then I would create an alias in my .bash_profile that would execute this script. As an example, I would create an orcl.env file that would hold all of the environment variables for this instance. Then in my .bash_profile I would create a line like the following:
    alias orcl=". $HOME/orcl.env"
    Then from anywhere you could type orcl and you would set your environment to connect to that database.
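    As a minimal sketch (assuming a typical single-instance 10gR2 install; all paths below are placeholders, not taken from this thread), such an orcl.env file might contain something like:
    # orcl.env - per-instance environment, sourced via the alias above
    export ORACLE_BASE=/u01/app/oracle
    export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
    export ORACLE_SID=orcl
    export PATH=$ORACLE_HOME/bin:$PATH
    # LD_LIBRARY_PATH is generally not needed for the bundled tools,
    # but some third-party clients still expect it:
    # export LD_LIBRARY_PATH=$ORACLE_HOME/lib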
    Also, if you are using 10g, something else that is really nice if you use SQL*Plus and connect to different databases without starting a new SQL*Plus session is to set a parameter in your $ORACLE_HOME/sqlplus/admin/glogin.sql file:
    set sqlprompt "_user 'at' _connect_identifier >"
    This will automatically change your command prompt to look like this:
    RALPH at ORCL >
    if you connect as GEORGE, your prompt will immediately change to :
    GEORGE at ORCL >
    This way you can always know who and where you are connected to.
    Good luck!

  • Best practice for Error logging and alert emails

    I have SQL Server 2012 SSIS. I have Excel files that are imported with an Excel Source and an OLE DB Destination. Scheduled jobs run the SSIS packages every night.
    I'm looking for advice on the best practice for a production environment. The requirements are the following:
    1) If an error occurs in a task, an email is sent to the admin
    2) If an error occurs in a task, we log it to a flat file or a DB
    Kenny_I

    Are you asking about the difference between using standard logging and event handlers? I prefer the latter, as standard logging will not always capture data in the way we desire. So we've developed a framework to add the necessary functionality inside event handlers and log the required data in the required format to a set of tables that we maintain internally.
    Visakh

  • Best Practices for zVM/SLES10/zDB2 environment for dialog instances.

    Hi, I am a zSeries system programmer who has just completed an IBM-led proof of concept which demonstrated the viability of running SAP instances on SUSE SLES10 Linux booted in zVM guests and accessing zDB2 data via HiperSockets. Before we build a Linux infrastructure using the 62 IFLs we just procured, we are wondering if any best practices for this environment have been developed as an OSS note or something else by SAP. Below you will find an email which was sent to, and responded to by, IBM and Novell on these topics...
    "As you may know, Home Depot has embarked on an IBM led proof of concept using SUSE SLES10 running in zVM guests on IBM zSeries hardware to host SAP server instances.  The Home Depot IT organization is currently in the midst of a large scale push to modernize our merchandising and people systems on SAP platforms.  The zVM/SUSE/SAP POC is part of that effort, as is a parallel POC of an Intel Blade/Red Hat/SAP platform.  For our production financial systems we now use a pSeries/AIX/SAP platform.
          So far in the zVM/SUSE/SAP POC, we have been able to create four zVM LPARS on IBM z9 hardware, create twelve zVM guests on those LPARS, boot SLES10 in those guests, install and run SAP instances in those guests using hipersockets for access to our DB2 SAP databases running on zOS, and direct user workloads to the SAP instances with good results.  We have also successfully developed cloning scripts that have made it possible to create new SLES10 instances, configured and ready for SAP installs, in about 10 seconds using FLASHCOPY and IBM DASD.
          I am writing in the hope that you can direct us to technical resources at IBM/Novell/SAP who may be able to field a few questions that have arisen.  In our discussions about optimization of the zVM/SUSE/SAP platform, we wondered if any wisdom about the appropriateness of and support for using zVM capabilities to virtualize SAP has ever been developed or any best practices drafted.  Attached you will find an IBM Redbook and a PowerPoint presentation which describes the use of the zVM discontiguous shared segments and the zVM named saved system features for the sharing of reentrant code and other  elements of Linux and its applications, thereby conserving storage and disk resources allocated to guest machines.   The specific question of the hour is, can any SAP code be handled similarly?  Have specific SAP elements eligible for this treatment been identified? 
          I've searched the SUSE Knowledgebase for articles on this topic to no avail.  Any similar techniques that might help us reduce the total cost of ownership of a zVM/SUSE/SAP platform as we compare it to Intel Blade/Red Hat/SAP and pSeries/AIX/SAP platforms are of great interest as we approach the end of our POC.  Can you help?
          Greg McKelvey is a Client I/T Architect at IBM.  He found the attached IBM documents and could give a fuller account of our POC.  Pat Downs, IBM zSeries IT Architect, has also worked to guide our POC. Akshay Rao, IBM Systems IT Specialist - Linux | Virtualization | SOA, is acting as project manager for the POC.  Jim Hawkins is the Home Depot Architect directing the POC.  I've CC:ed their email addresses.  I am sure they would be pleased to hear from you if there are the likely questions about what the heck I am asking about here.  And while writing, I thought of yet another question that I hoping somebody at SAP might weigh in on; are there any performance or operational benefits to using Linux LVM to apportion disk to filesystems vs. using zVM to create appropriately sized minidisks for filesystems without LVM getting involved?"
    As you can see, implementation questions need to be resolved.  We have heard from Novell that the SLES10 Kernel and other SUSE artifacts can reside in memory and be shared by multiple operating system images.  Does SAP support this configuration?  Also, has SAP identified SAP components which are eligible for similar treatment?  We would like to make sure that any decisions we make about the SAP platforms we are building will be supportable.  Any help you can provide will be greatly appreciated.  I will supply the documents referenced above if they are not known to any answerer.  Thanks,  Al Brasher 770-433-8211 x11895 [email protected]

    Hello Al,
    First, let me welcome you on board. I am sure you won't be disappointed with your choice to run SAP on z/OS.
    As for your questions:
    It wasn't easy to find them in this long post, so I suggest you take the time to write a short summary that contains a very short list of questions.
    As for answers, here are a few useful sources of information:
    1. The SAP on DB2 for z/OS SDN page:
    SAP on DB2 for z/OS
    In it you can find two relevant docs:
    a. best practices for ...
    b. database administration for DB2 UDB for z/OS.
    This second publication is excellent; apart from DB2-specific info, it contains information on all the components of SAP on DB2 for z/OS, like zLinux, z/VM and so on.
    2. I can see that you are already familiar with the IBM Redbooks, but it seems that you haven't taken the time to get the most out of that resource. From your post it is clear that you have found one useful publication, but I know there are several.
    3. A few months ago I wrote a short post on a similar subject. I'm sure it's not exactly what you are looking for at this moment, but it's a good start, and with some patience you may be able to get some answers. Here's a link:
    http://blogs.ittoolbox.com/sap/db2/archives/index-of-free-documentation-on-sap-db2-administration-14245
    Good luck.
    Omer Brandis

  • Best practice for a deployment (EAR containing WAR/EJB) in a productive environment

    Hi there,
    I'm looking for some hints regarding best-practice deployment in a productive
    environment (currently we are not using a WLS cluster).
    We are using ANT for building, packaging and (dynamic) deployment (via weblogic.Deployer)
    on the development environment, and this works fine (in the meantime).
    From my point of view, I would prefer this kind of deployment not only
    for development, but also for the productive system.
    However, I found some hints in some books, and those authors prefer static deployment
    for the P system.
    My questions now:
    Could anybody provide me with some links to whitepapers regarding best practice
    for a deployment into a P system?
    What is your experience with the new two-phase deployment introduced with WLS 7.0?
    Is it really a good idea to use static deployment (what is the advantage of
    this kind of deployment)?
    Thanks in advance
    -Martin
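
    For reference, a minimal weblogic.Deployer invocation of the dynamic-deployment kind described above might look like the following; the admin URL, credentials, application name, EAR path and target are placeholders, not taken from this post:
    # Hypothetical dynamic deployment of an EAR with weblogic.Deployer
    java weblogic.Deployer \
        -adminurl t3://prod-admin-host:7001 \
        -username system -password ******** \
        -deploy -name MyApp \
        -source ./dist/MyApp.ear \
        -targets prodServer \
        -stage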


  • Best practice for the test environment & DBA plan activities documents

    Dear all,
    In our company, we did the sizing for the hardware.
    We have three environments (Test/Development, Training, Production).
    But the test environment servers are smaller than the production environment servers.
    My question is:
    What is the best practice for the test environment?
    (Are there any recommendations from Oracle related to this, or any PDF files that could help me?)
    Also, can I have a detailed document regarding the DBA plan activities?
    I appreciate your help and advice.
    Thanks

    Follow your build document for the same steps you used to build production.
    You should know where all your code is. You can use the deployment manager to export your configurations. Export customized files from MDS. Just follow the process again, and you will have a clean instance not containing production data.
    It only takes a lot of time if your client is lacking documentation or if you're not familiar with all the parts of the environment. What's 2-3 hours compared to all the issues you will run into if you copy databases or import/export schemas?
    -Kevin

  • Best practices for development / production environments

    Our current scenario:
    We have one production database server containing the APEX development install, plus all production data.
    We have one development server that is cloned nightly (via RMAN duplicate) from production. It therefore also contains a full APEX development environment, and all our production data, albeit 1 day old.
    Our desired scenario:
    We want to convert the production database to a runtime only environment.
    We want to be able to develop in the test environment, but since this is an RMAN-duplicated database, every night the runtime-only APEX will overlay it, and the production versions of the apps will overwrite any development versions. However, we still want to have up-to-date data against which to develop.
    Questions: What is best practice for this sort of thing? We've considered a couple of options:
    1.) Find a way to clone the database (RMAN or something else) that will leave the existing APEX environment intact. If that is doable, we can modify our nightly refresh procedure to refresh the data, but not APEX.
    2.) Move APEX (in both prod and dev environments) to a separate database containing only APEX, and use DBLINKs to point to the data in both cases. The nightly refresh would only refresh the data, and the APEX database would be unaffected. This would require rewriting all apps to use DBLINKs, though, as well as requiring a change to the code when moving to production (i.e. modifying the DBLINK to the production value).
    3.) Require the developers to export their apps when done for the day, and reimport them the following morning. This would leave the RMAN duplication process unchanged, but would add a manual step which the developers loathe.
    We basically have two mutually exclusive requirements - refresh the database nightly for the sake of fresh data, but don't refresh the database ever for the sake of the APEX environment.
    Again, any suggestions on best practices would be helpful.
    Thanks,
    Bill Johnson

    Bill,
    To clarify, you do have the ability to export/import, happily, at the application level. The issue is that if you have an application that consists of more than a couple of pages, you will find yourself in a situation where changes to page 3 are tested and ready, but changes to pages 2, 5 and 6 are still in various stages of development. You then need to get the change for page 5 in to resolve a critical production issue. How do you do this without also sending pages 2 and 6 in their current state, if you have to move the application all at once? The point is that you absolutely are going to need version control at the page level, not at the application level.
    Moreover, the only supported way of exporting items is via the GUI. While practically everyone doing serious APEX development has gone on to either PL/SQL or utility hacks, Oracle still will not release a supported method for doing this. I have no idea why this would be; maybe one of the developers would care to comment on the matter. Obviously, if you want to automate, you will have to accept this caveat.
    As to which backend source control tool you use, the short answer is that it really doesn't matter. As far as the VC system is concerned, your APEX exports are simply files. Some versioning systems allow promotion of code through various SDLC stages. I am not sure about Git in particular but, if it doesn't support this directly, you could always mimic the behavior with multiple repositories. That is, create a development repository that you automatically update via exports every night. Whenever particular changes are promoted to production, you can at that time export from the development repository into the production one. You could, of course, create as many of these "stops" as necessary to mirror your shop's SDLC stages, e.g. dev, QA, integration, staging, production, etc.
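    As a rough sketch of that multi-repository idea (the directories, repository layout and the f100.sql export file name are placeholders; the nightly export itself still has to come from whatever export mechanism you settle on):
    # Hypothetical nightly job: commit tonight's APEX export files into the development repo
    cd /opt/apex_exports/dev && git add -A && git commit -m "Nightly APEX export $(date +%F)"
    # Hypothetical promotion step: copy an approved export into the production repo and tag it
    cp /opt/apex_exports/dev/f100.sql /opt/apex_exports/prod/
    cd /opt/apex_exports/prod && git add f100.sql && git commit -m "Promote f100 to production" && git tag prod-$(date +%F)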
    -Joe

  • Best practices for promotions to production

    My company's production environment is way too loose; I need to implement some controls. My analysts keep fouling up the production objects. Does anyone know of best practices for an organization rolling out production changes?
    thanks

    Yes, you can. With SOA 11g, you can create deployment profiles to change properties during deployment. You can also build your own deployment mechanism, as I did.
    http://orasoa.blogspot.com/2009/04/new-oracle-soa-build-server-osbs.html
    Marc

  • Best Practice for FlexConnect Wireless roaming in MediaNet environment?

    Hello!
    Current Cisco best practice recommendations for enterprise MediaNet design specify that VLANs be local to a switch / switch stack (i.e., to limit the scope of spanning tree).
    In the wireless world, this causes problems if you want users while roaming to keep real-time applications up and running.  Every time they connect to a new AP on a different VLAN, then they will need to get a new IP address, which interrupts real-time apps. 
    So...best practice for LAN users causes real problems for wireless users.
    I thought I'd post here in case there's a best practice for implementing wireless roaming in a routed environment that we might have missed so far!
    We have a failover pair of FlexConnect 7510s, btw, configured for local switching for Internal users, and central switching with an anchor controller on the DMZ for Guest users.
    Thanks,
    Deb

    Thanks for your replies, Stephen and JSnyder.
    The situation here is that the original design engineer is no longer here, and the original design was not MediaNet-friendly, in that it had a very few /20 subnets bridged over entire large sites. 
    These several large sites (with a few hundred wireless users per site), are connected to an HQ location (where the 7510s in failover mode are installed) via 1G ethernet hand-offs (MPLS at the WAN provider).  The 7510s are new, and are replacing older contollers at the HQ location. 
    The internal employee wireless users use resources both local to their site, as well as centralized resources.  There are at least as many Guest wireless users per site as there are internal employee users, and the service to them consists of Internet traffic only.  (When moved to the 7510s, their traffic will continue to be centrally switched and carried to an anchor controller in the DMZ.) 
    (1) So, going local mode seems impractical due to the sheer number of users whose traffic bound for their local site would be traversing the WAN twice.  Too much bandwidth would be used.  So, that implies the need to use Flex / HREAP mode instead.
    (2) However, re-designing each site's IP environment for MediaNet would suggest going routed to the closet, which breaks seamless roaming for users....
    So, this conundrum is why I thought I'd post here, and see if there was some other cool / nifty solution I wasn't yet aware of. 
    The only other (possibly friendly to both needs) solution I'd thought of was to GRE tunnel a subnet from each closet to the collapsed Core / Disti switch at each site.  Unfortunately, GRE tunnels are not supported in the rev of IOS on the present equipment, and so it isn't possible to try this idea.
    Another "blue sky" idea I had (not for this customer, but possibly elsewhere in the future), is to use LAN switches such as 3850s that have WLC functionality built-in.  I haven't yet worked with the WLC s/w available on those, but I was thinking it looks like they could be put into a mobility group, and L3 user roaming between them might then work.  Do you happen to know if this might be a workable solution to the overall big-picture problem? 
    Thanks again for taking the time and trouble to reply!
    Deb

  • Best Practice for Managing a BPC Environment?

    My company is currently running a BPC 5.1 MS environment and will soon be upgrading to version 7.0 MS.  I was wondering if there is a white paper or some guidance that anyone can give with regard to the best practice for managing a BPC environment.  Which brings to light several questions in my mind:
    1. Which department(s) in a company should "own" the BPC application?
    2. If both, what's SAP's recommendation for segregation of duties?
    3. What roles should exist within our company to manage BPC?
    4. What type(s) of change control is SAP's "Best Practice"?
    We are currently evaluating the best way to manage the system across multiple departments, however there is no real business ownership in the system, which seems to be counter to the reason for having BPC as a solution in the first place.
    Any guidance on this would be very much appreciated.


  • Best Practices for AD and Windows Environment

    Hello Everyone,
    I need to create a document covering the best practices for AD, including best practices for DNS, DHCP, AD structure, Group Policy, trusts, etc.
    I just need the best practices, irrespective of what is implemented in our company.
    I just need to create a document for analysis for now. I searched the internet but could not find much. I would ask you all to share suggestions on where I can find those.
    If anyone could send me a link or point me to one, that would help; I am pretty new to the technology, so I need your help.
    Thanks in Advance

    I have an article where I shared the best practices to use to avoid known AD/DNS issues: http://www.ahmedmalek.com/web/fr/articles.asp?artid=23
    However, you first need to identify your requirements, and based on those requirements you can identify what should be implemented in your environment and how to manage it. The basics are that you need at least two DC/DNS/GC servers per AD domain for high availability. You also need to take a system state backup of at least one DC/DNS/GC server in your domain. As for DHCP, you can use the 50/50 or 80/20 DHCP rule depending on your setup.
    You can also refer to that: https://technet.microsoft.com/en-us/library/cc754678%28v=ws.10%29.aspx
    Ahmed MALEK
