Dreamweaver 8 - Test/Production Server Question and Working Remotely

Hello Everyone,
Right now I use Dreamweaver 8 at work with no problems. My
computer at work is also my test server. I do everything from that
one machine and then upload/put the files to our production server
once I know they are working correctly. So basically my work
computer/server is a mirror of our production server. My question
is this:
Right now I use either Remote Administrator (Radmin) or
Remote Desktop, when working from home, to connect to my work
computer/test server (same computer). This is fairly slow for
moving around and editing/developing. Is it possible to use
Dreamweaver 8 with WebDAV or RDS so that I can edit/create the
files directly on my work machine from home, and then, when I PUT
a file, have it go to my production server? So basically nothing
changes about how I do it except that I am doing it remotely.
Right now Dreamweaver downloads the files to my home computer, and
when I PUT them they go only to my test server. I would like it to
save the files to my test server and put the files to my
production server.
Any help would be greatly appreciated. Maybe I am just not
setting it up correctly.

This can’t be done, as RDS is about working with and
synchronising to your server.
You may be able to set up two sites (test server and live
server) with the same local root folder, depending on how you are
set up, but at some point you will create synchronisation issues.
My solution is to separate the two processes: all work is
done against the test server. When changes are ready to go live,
they are FTP’ed from the test server to the live server using a
separate FTP program, and a snapshot of the site is taken and
backed up along with the FTP log. Although this seems to add to the
workflow, it ensures that live changes are conscious changes and
that a backup (rollback ability) is maintained.
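
The snapshot-and-log step of that workflow can be sketched in Python. This is a minimal illustration, not Dreamweaver's behaviour: `push_to_live` and the `upload` callback are hypothetical names, with the callback standing in for the separate FTP program that actually sends files to the live server.

```python
import zipfile
from datetime import datetime
from pathlib import Path

def push_to_live(test_root, backup_dir, upload=print):
    """Snapshot the test-server tree, then push each file, logging it.

    `upload` stands in for the separate FTP program; it receives each
    relative file path that would be sent to the live server.
    """
    test_root = Path(test_root)
    backup_dir = Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    snapshot = backup_dir / f"site-{stamp}.zip"
    log_path = backup_dir / f"push-{stamp}.log"

    files = sorted(p for p in test_root.rglob("*") if p.is_file())
    # Take the snapshot BEFORE anything is pushed, so it records the
    # exact state that went live.
    with zipfile.ZipFile(snapshot, "w") as zf:
        for p in files:
            zf.write(p, p.relative_to(test_root))
    # Push each file and record it in the log, mirroring the FTP log
    # that gets backed up alongside the snapshot.
    with open(log_path, "w") as log:
        for p in files:
            rel = p.relative_to(test_root).as_posix()
            upload(rel)
            log.write(rel + "\n")
    return snapshot, log_path
```

Keeping the snapshot and the push log side by side in the backup folder is what preserves the rollback ability described above.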

Similar Messages

  • Difference between Terminal server connection and MSTSC remote desktop connection

    Hi everyone, what's the main difference between a Terminal Server connection and an MSTSC Remote Desktop connection? Both are used to connect to a server remotely, right?
    Are there any significant differences between them?

    RDP is a protocol and MSTSC is the RDP client.
    In the situation you are referring to, you use MSTSC (the RDP client) in both cases, but when you connect to a client PC (Windows 7, Windows XP, etc.) you can only connect if nobody is using that PC at the moment; if you connect to a server, you can work alongside other users. A server OS supports 2 connections by default; if you want more users, you need Remote Desktop Services and RDS licenses.
    Thanks,
    сила в справедливости ("strength is in justice")

  • Bundle export/import questions and work arounds

    Hello,
    I've begun to use bundles to migrate components from our test server to our production server. Bundles are a great feature that I missed the last time I worked with N1SPS a few years back, but I've found a few things I had to work around.
    1. It would be very nice to be able to set the version to something like "latest" in the bundle, so I always get the latest version of all components when I do an export.
    2. The bundle should be able to export in a certain order: first folders, then components, then containers, and lastly plans. Today I have to add everything to the bundle in this order or change the XML file in the jar afterwards. A function to change the order within the bundle would also help.
    3. When the bundle is imported, the container refers to components of version 1.1 for some reason (in this case a directory), but the component itself is imported as version 1.0. I had to change the version reference in the XML to get this to work.
    Are there workarounds, or are these all RFEs and bug fixes?
    Regards
    Henrik

    Thank you for your fast reply. Is (1) something that could be the subject of an RFE? I think it's a common workflow to export the latest versions from a development server to a production server.
    Regards
    Henrik

  • PHP/MySQL Test/Production Server best practices

    Hello,
    I am currently learning PHP/MySQL and have set up a test server to develop on and a production server to go live with. I wanted to know the best practices for synchronizing the test server with the production server. Should I export the database from the test server and import it into the production server each time I make a change, or is there a better way to incrementally sync the databases? I am using Dreamweaver to design the web site.
    Thanks,
    Nick

    Thanks, but does this mean that after I go live I should make changes on the production database only and not use the development database if, say, I need to add a new table or record(s)?
    Procedure
    1. Take production web site down
    2. Export/Save current database
    3. Make changes to production database
    4. Export/Save new database
    5. Bring production web site up
    Is this correct?
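
    The export/import cycle above boils down to running `mysqldump` against one server and replaying the dump against the other. Below is a minimal Python sketch that only builds the command lines for those two steps (the host names, user, database, and file name are hypothetical placeholders, and `--single-transaction` assumes InnoDB tables):

```python
def dump_command(host, user, database, outfile):
    # Steps 2/4: export the current database to a file; the dump file
    # doubles as the backup taken before and after the change.
    return ["mysqldump", f"--host={host}", f"--user={user}",
            "--single-transaction", "--routines",
            f"--result-file={outfile}", database]

def restore_command(host, user, database):
    # Step 3: replay the dump against the target server, e.g.
    #   mysql --host=... --user=... dbname < dumpfile
    return ["mysql", f"--host={host}", f"--user={user}", database]

print(" ".join(dump_command("test-db", "appuser", "shopdb", "shopdb.sql")))
print(" ".join(restore_command("prod-db", "appuser", "shopdb")), "< shopdb.sql")
```

    Keep the dump from step 2 until the change is verified on production, since it is your rollback point.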

  • CD-ROM not detected on notebook, but tested on another machine and works.

    I am checking my dad's laptop, a Compaq Presario F700 (F761US). It has a couple of problems, but the main one is that the DVD drive is not detected by Windows Vista Home Premium 32-bit. The BIOS doesn't seem to detect it either when I put in a bootable disc and try to force it to boot from the drive; it only detects the hard drive.
    I have even removed the drive and tested it in another machine, and it DOES in fact work.
    I even upgraded the BIOS to the latest version (F.0.A) to no avail.

    Hello MysticalChicken.  I understand you're having some trouble with your CD-ROM.
    First, install those service packs!  Those are very important updates.  
    Have you tried cleaning the inside of the CD-ROM with canned air?
    The "whirring/clicking" noise leads me to believe this is a mechanical problem.  The click could very well be the disc touching something while it is spinning at a high speed.  Let me know if cleaning the CD-ROM has any effect on the noise.
    Have a wonderful day and I'll keep an eye out for your reply.

  • Cannot install or manage Server 2012 R2 RDS server locally but works remotely

    I am working with a Server 2012 R2 standard machine and attempting to get Remote Desktop Services installed and configured on it. Using the Add Roles and Features wizard while logged on locally to the server in question resulted in the error
    “Unable to connect to the server by using Windows PowerShell remoting.” However, if I use a different Server 2012 R2 machine to run the Add Roles and Features wizard remotely, targeted at the original server, then I can successfully get RDS installed.
    Also, after the installation has completed, I cannot manage RDS locally on the server, but I can successfully manage it remotely from another Server 2012 R2 box. When I attempt to use Server Manager locally and choose the Remote Desktop Services menu, I get the error
    message "A Remote Desktop Services deployment does not exist in the server pool."
    The server appears to be functioning correctly and can be managed remotely, just not locally. I can reproduce the behavior on other Server 2012 boxes in the environment.
    What would cause the local install and management to fail while remote management works?

    Hi,
    Have you added the RDS server under Server Manager > Add Servers? Does the server show up in the server list?
    Also check whether there is an incorrect IP address/hostname entry in the DNS records; the server's DNS entry should resolve correctly.
    Add Servers to Server Manager
    https://technet.microsoft.com/en-in/library/hh831453.aspx
    Hope it helps!
    Thanks.
    Dharmesh Solanki

  • Pre-install HYPER-V VMs, then Move to Production Server - Questions

    Hi All,
    I am currently taking over IT Admin duties for a new client, and will be downscaling their bloated 20-server environment to a 5-server Hyper-V environment (about 4 VMs per server). It will be a complicated and rather time-consuming/time-sensitive process, given that as soon as I am handed the Admin passwords, I am expected to transition the entire network to a new DC server VM, a separate Exchange VM, and a separate DirectAccess VM over a 3-day weekend.
    I was thinking about getting started on this early, and was planning on setting up and configuring the DC and Exchange Server VMs on an old spare Dell server I have, using my licensed copy of Server Standard 2012 R2 with the Hyper-V role enabled. Then, when it comes time to transition them over the 3-day weekend, it would be a simple matter of copying the Hyper-V folder (containing the Virtual Hard Disks & Virtual Machines folders) from the spare Dell server to a portable USB drive that will be brought to the client's site. At the client's site, I would erase one of the client's 20 servers (probably the best of the bunch), install Server Standard 2012 R2 with Hyper-V on it, and copy the Hyper-V folder containing the DC & Exchange servers from the portable USB drive to it. Then I would simply import the DC and Exchange VMs via Hyper-V Manager.
    My two questions are as follows:
    1) Just to be sure: given that the DC and Exchange VMs are virtualized, it should make no difference that I am transitioning from an old, underpowered Dell server to a rather modern one. I know performance will be better on the new server, but there will be no complications with the VM migration (aside from allocating more cores, memory, & virtual NICs)...yes?
    2) The client's Windows Server licenses: the client is currently under an Enterprise Subscription license with Microsoft (either Open Value or Open Business; I don't know yet). For the time being (before the 3-day transition), can I get away with using 180-day trial editions of Server Standard 2012 R2 and Exchange 2013 to set up and configure the VMs, and then activate the trial editions later using their existing Enterprise keys? Or will this not work for some compatibility reason?
    There are over 200 users and 300 email accounts, so I would really prefer to configure all this ahead of time if I can, but I don't want to find out last minute, or on migration day, that my Enterprise key won't activate because I used the trial editions, leaving me to reinstall/reconfigure all over again. Is this possible, or is there another, better, recommended way?
    Ultimately, I would like to get as much done ahead of time as possible, so that the major issues on migration day will be data migration (files, folders, and email) and the reinstallation of all 100 workstations (via Windows Deployment Services - yeah baby!!!).
    Does this sound possible? Any advice would be greatly appreciated.

    I see only one problem with your plan: Once you promote Windows Server to be a domain controller, you can't change its license key. That means that you won't be able to go from trial to licensed on that one. Since you're talking about transitioning from
    a physical DC to a virtualized DC, what I've done in the past is create a temporary virtualized domain controller with a trial key. Then, you can use all the AD tools to decommission and remove the original and then rebuild it from scratch as a virtual machine
    with the proper license key later. That lets you re-use its IP -- and, while riskier, even its name -- if you want, although you need to ensure you do an extra thorough job of cleaning out all the remnants of the original first (AD Sites & Services and
    ADSIEdit are your friends). Once you're satisfied with the rebuilt DC, you can decomm the temporary DC at your leisure, or keep it for redundancy if they have sufficient licensing. Pay special attention to relocating DNS and DHCP, etc.
    Everything else looks sound. Exchange migrations aren't my forte, though.
    Eric Siron
    Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.

  • List all printers configured from server and local machine when hosted in server machine and working from Client machine

    Hello Team
    I am developing a web application in ASP.NET. When the application runs from IIS (hosted on a server machine and accessed from a client machine using a web browser), it should list all the configured printers, both local and network.
    Can anyone please let me know the process? I was able to load the printers from the server, but I also need to load the printers from the local (client) machine.
    Thanks in Advance
    Bindu

    Hi Bindu,
    I am developing web application in asp.net. When application run from IIS(hosted in server machine running from client machine using web browser) application should list all the configured printers in local and network.
    From your message, this is an ASP.NET web application, and it is also related to IIS.
    You should post in the dedicated ASP.NET forum:
    http://forums.asp.net
    For IIS issues, post in the IIS forum: http://forums.iis.net/
    Thanks

  • Deploying portal from development to production server

    Scenario:
    - Development server running OracleAS Standard Edition One.
    - Production server running OracleAS Standard Edition One
    Idea is to develop portal on development server, and, once satisfied and tested, deploy to production server.
    Questions
    1. Is this the right development model? If not, what is the recommended approach?
    2. Would deployment take place via Enterprise Manager? If so, would the development portal have to be turned into an EAR file, and then pulled in to the production server? Or can the two servers be managed within a single instance of Enterprise Manager (on the production server?), and if so, how would the deployment model work?
    Thanks for the input.

    Hi there,
    A1: Yes, I believe so, but it will actually depend on what you wish to do.
    A2: It depends on how your application is made. I don't think you'll be using EM to make the deployment, as this is a Portal. You may need to either use the iAS cloning process or the RDBMS utilities (Metalink may help you further, depending on the Portal version you are on), or use the Portal Export/Import feature to move a Page Group from one side to the other (this works one-way only; please refer to chapter 10 of the Portal Configuration Guide).
    As to whether the "two servers can be managed within a single instance of Enterprise Manager"... why not? You can have two RepCA databases hooked to the same iAS infrastructure to be managed. Would I recommend it this way? Maybe not... remember, if one is a Development environment and the other a Production environment, I'd recommend that they be kept truly separate (just playing it safe, maybe :-) ).
    I hope it helps...
    Cheers,
    Pedro.

  • ESS Leave request screen giving a critical Error in production server

    Friends,
    We are in a critical phase of an ESS implementation.
    We are doing an ESS/MSS implementation for country grouping 99.
    When we moved our changes to the production server after successful testing in quality, we got the following critical error on the Leave Request screen:
    java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
         at java.util.ArrayList.RangeCheck(ArrayList.java:512)
         at java.util.ArrayList.get(ArrayList.java:329)
         at com.sap.aii.proxy.framework.core.JcoBaseList.get(JcoBaseList.java:272)
         at com.sap.aii.proxy.framework.core.AbstractList.get(AbstractList.java:230)
         at com.sap.tc.webdynpro.modelimpl.dynamicrfc.DynamicRFCList.get(DynamicRFCList.java:281)
         at com.sap.tc.webdynpro.progmodel.context.Node$ModelElementList.getElement(Node.java:2543)
         at com.sap.tc.webdynpro.progmodel.context.Node.getElementAtInternal(Node.java:621)
    Other areas, like Personal Info and Who's Who, are working fine.
    Leave Request worked fine in the Development and Quality servers, but it has never worked in the Production server.
    It worked fine in the quality server with the same config, the same master data, and the same employee & org structure.
    We tried the following things:
    1. Checked and confirmed the sequence of transports for configs and developments to Quality and Production. We even compared the table-level entries and ABAP code between Dev and Production; all are the same.
    2. Moved the workflow changes to production and activated them. No change after that.
    3. Gave full SAP authorization in R/3 and full authorization from the portal side as well.
    4. Assigned the user ID to different employees and checked the employees' master data.
    5. Checked note 1388426. Everything mentioned in the note is present in the system.
    6. Verified that the rule groups and the WEBMO feature are correct and the same as in quality.
    As our go-live date is very near, we request your help. Thanks in advance.
    Regards,

    The customisation of the Leave Request is missing in your system; please check the rule group using transaction PTARQ.

  • Charm Impact on changing the production server

    Hi All,
    We are currently using ChaRM for a three-system ECC landscape.
    There are currently many changes in various stages.
    The team is now planning to move the production server to a high-availability server.
    There will not be any change in the transport route; only the new server details will be added in place of the existing server's.
    I just wanted to know whether there will be any major impact on the current changes.
    Kindly advise.
    Thanks,
    Subhashini.

    Hi
    Is it like a system migration to the new server, or is it by client export?
    Can you refer to the thread [How to replicate Existing running Production server.|How to replicate Existing running Production server.]
    and read [Note 1395727 - ChaRM project does not work after product version changed|https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1395727]
    and, further,
    [Note 1394820 - SMSY: Converting SAP ECC system to SAP ERP system|https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=0001394820&nlang=E]
    Check and update.
    Jansi

  • Sudden Kernal Panics with Production Server

    I was called in today because a departmental supervisor had to manually restart their production server, and the application that ran on it didn't restart properly. After some investigation, and then another controlled restart to get the application going again, the server kernel panicked. From examination, it appears this is what happened the first time as well.
    As the server is in production, I've not taken any steps from Dr. Smoke's troubleshooting list, but I have pulled the panic.log and I'm posting it here. If someone has any clue where I should start, that would be helpful!
    Wed Nov 16 13:58:32 2005
    panic(cpu 0 caller 0x000E7FF4): vnode_put(1d95210): iocount < 1
    Latest stack backtrace for cpu 0:
    Backtrace:
    0x00095698 0x00095BB0 0x0002683C 0x000E7FF4 0x000E7FA0 0x000D4A18 0x002A9BF4 0x000ABE30
    0xFF5DB3C2
    Proceeding back via exception chain:
    Exception state (sv=0x2F8F6280)
    PC=0x9001C240; MSR=0x0000F030; DAR=0xE0E9CB38; DSISR=0x02200000; LR=0x90B14554; R1=0xF1425710; XCP=0x00000030 (0xC00 - System call)
    Kernel version:
    Darwin Kernel Version 8.3.0: Mon Oct 3 20:04:04 PDT 2005; root:xnu-792.6.22.obj~2/RELEASE_PPC
    Could this be a processor flaking out? I'm sure I'll have to do this after hours, so any head start I can get would be greatly appreciated!
    PowerBook G4 17   Mac OS X (10.3.9)   1.5G RAM

    Hi, L.
    The panic cited, i.e.
    panic(cpu 0 caller 0x000E7FF4): vnode_put(1d95210): iocount < 1
    implies something is amiss with either the affected Mac's hard drive directory (potentially affecting either file information or virtual memory swap files) or your RAM. Processor or programming errors are also possible, but I'd say less likely.
    A vnode is defined here as "an in-memory data structure containing information about a file."
    The particular message cited in the panic comes from the vfs_subr.c component of the operating system, concerned with "external virtual filesystem routines."
    I suggest you troubleshoot this off-hours using my "Resolving Kernel Panics" FAQ, as you noted. The FAQ includes step-by-step instructions for identifying and resolving some of the most common causes of kernel panics.
    My FAQ is a roadmap: start at the beginning and work through to the end, following the instructions in the order specified, including the "If all else fails..." section if a cause or resolution is not found in an earlier troubleshooting step.
    If you look at this Google search you'll find similar reports involving hard drive files, RAM, a bad NIC, and related things that the troubleshooting steps in my FAQ would help you isolate. However, follow the FAQ step by step rather than skipping around, as it's designed to find high-probability/low-effort causes up front.
    Good luck!
    Dr. Smoke
    Author: Troubleshooting Mac® OS X
    Note: The information provided in the link(s) above is freely available. However, because I own The X Lab™, a commercial Web site to which some of these links point, the Apple Discussions Terms of Use require I include the following disclosure statement with this post:
    I may receive some form of compensation, financial or otherwise, from my recommendation or link.

  • Transporting object from development to production server

    Hi Everyone,
    Could you please tell me how I can transport an object from the development server to the production server if that object is local in the production server? And also, could you tell me whether the transfer structure is replaced if the names are different in both servers?

    Could you please tell me how I can transport an object from the development server to the production server if that object is local in the production server?
    Case 1: The object is in Prod but not in Dev, and you want to align the systems through transports.
    You will have to send it to Dev (which can be done by changing the transport routes and then following the normal transport procedure) and then bring it back with "Overwrite Originals".
    Case 2: The object is present in both Prod and Dev.
    You can send the transports Dev -> Prod with "Overwrite Originals" (following the transport procedures).
    And also, could you tell me whether the transfer structure is replaced if the names are different in both servers?
    If the technical names are different in the two servers, the transfer structure will not be overwritten.

  • Urgent: Rebooted the Production Server 3 times in a Day.

    Hi,
    We are facing a problem on our production server. To restore normal functionality we have rebooted the server, but it didn't help; it is requiring multiple reboots.
    Our OS is Windows 2000 Server,
    and the alert file contains errors like this:
    ORACLE V10.2.0.4.0 - Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Windows 2000 Version V5.0 Service Pack 4
    CPU : 4 - type 586
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:5718M/6141M, Ph+PgF:7797M/8001M, VA:1923M/2047M
    Tue Dec 23 12:54:53 2008
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 2
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =18
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    Tue Dec 23 12:55:08 2008
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.4.0.
    System parameters with non-default values:
    processes = 150
    __shared_pool_size = 914358272
    __large_pool_size = 8388608
    __java_pool_size = 16777216
    __streams_pool_size = 0
    sga_target = 1610612736
    control_files = E:\QUANTIS\CONTROL\CONTROL01.CTL, E:\QUANTIS\CONTROL\CONTROL02.CTL
    db_block_size = 8192
    __db_cache_size = 662700032
    compatible = 10.2.0.3.0
    log_archive_dest = E:\ARCHIVES
    log_archive_format = QNTS_%t_%s_%r.log
    db_file_multiblock_read_count= 16
    undo_management = AUTO
    undo_tablespace = UNDOTBS1
    remote_login_passwordfile= EXCLUSIVE
    db_domain =
    dispatchers = (PROTOCOL=TCP) (SERVICE=QNTSV4PRXDB)
    job_queue_processes = 10
    cursor_sharing = FORCE
    audit_file_dest = E:\QUANTIS\ADUMP
    background_dump_dest = E:\QUANTIS\BDUMP
    user_dump_dest = E:\QUANTIS\UDUMP
    core_dump_dest = E:\QUANTIS\CDUMP
    db_name = QNTSV4PR
    db_unique_name = QNTSV4PROD
    open_cursors = 300
    pga_aggregate_target = 78643200
    PMON started with pid=2, OS id=2356
    MMAN started with pid=4, OS id=2380
    DBW0 started with pid=5, OS id=2416
    LGWR started with pid=6, OS id=2420
    CKPT started with pid=7, OS id=2424
    SMON started with pid=8, OS id=2428
    RECO started with pid=9, OS id=2436
    CJQ0 started with pid=10, OS id=2440
    MMON started with pid=11, OS id=2444
    MMNL started with pid=12, OS id=2448
    Tue Dec 23 12:55:10 2008
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    starting up 1 shared server(s) ...
    PSP0 started with pid=3, OS id=2376
    Tue Dec 23 12:55:11 2008
    alter database mount exclusive
    Tue Dec 23 12:55:16 2008
    Setting recovery target incarnation to 1
    Tue Dec 23 12:55:16 2008
    Successful mount of redo thread 1, with mount id 233636527
    Tue Dec 23 12:55:16 2008
    Database mounted in Exclusive Mode
    Completed: alter database mount exclusive
    Tue Dec 23 12:55:16 2008
    alter database open
    Tue Dec 23 12:55:17 2008
    Beginning crash recovery of 1 threads
    parallel recovery started with 3 processes
    Tue Dec 23 12:55:18 2008
    Started redo scan
    Tue Dec 23 12:55:18 2008
    Completed redo scan
    742 redo blocks read, 139 data blocks need recovery
    Tue Dec 23 12:55:18 2008
    Started redo application at
    Thread 1: logseq 1284, block 44127
    Tue Dec 23 12:55:18 2008
    Recovery of Online Redo Log: Thread 1 Group 3 Seq 1284 Reading mem 0
    Mem# 0: E:\QUANTIS\DATA\REDO03.LOG
    Tue Dec 23 12:55:18 2008
    Completed redo application
    Tue Dec 23 12:55:19 2008
    Completed crash recovery at
    Thread 1: logseq 1284, block 44869, scn 11774193
    139 data blocks read, 115 data blocks written, 742 redo blocks read
    Tue Dec 23 12:55:21 2008
    LGWR: STARTING ARCH PROCESSES
    ARC0 started with pid=20, OS id=3520
    ARC1 started with pid=17, OS id=3532
    Tue Dec 23 12:55:21 2008
    ARC0: Archival started
    ARC1: Archival started
    LGWR: STARTING ARCH PROCESSES COMPLETE
    Tue Dec 23 12:55:22 2008
    Thread 1 advanced to log sequence 1285 (thread open)
    Thread 1 opened at log sequence 1285
    Current log# 1 seq# 1285 mem# 0: E:\QUANTIS\DATA\REDO01.LOG
    Successful open of redo thread 1
    Tue Dec 23 12:55:22 2008
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    Tue Dec 23 12:55:22 2008
    ARC1: Becoming the heartbeat ARCH
    Tue Dec 23 12:55:22 2008
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Tue Dec 23 12:55:22 2008
    SMON: enabling cache recovery
    Tue Dec 23 12:55:24 2008
    Successfully onlined Undo Tablespace 1.
    Tue Dec 23 12:55:24 2008
    SMON: enabling tx recovery
    Tue Dec 23 12:55:24 2008
    Database Characterset is WE8MSWIN1252
    Opening with internal Resource Manager plan
    where NUMA PG = 1, CPUs = 4
    replication_dependency_tracking turned off (no async multimaster replication found)
    Starting background process QMNC
    QMNC started with pid=21, OS id=416
    Tue Dec 23 12:55:28 2008
    Completed: alter database open
    Tue Dec 23 13:18:29 2008
    Warning: skgmdetach - Unable to register unmap, error 4210
    Tue Dec 23 13:20:39 2008
    Process J001 died, see its trace file
    Tue Dec 23 13:20:39 2008
    kkjcre1p: unable to spawn jobq slave process
    Tue Dec 23 13:20:39 2008
    Errors in file e:\quantis\bdump\qntsv4pr_cjq0_2440.trc:
    Tue Dec 23 13:23:29 2008
    Warning: skgmdetach - Unable to register unmap, error 4210
    Tue Dec 23 13:26:45 2008
    Errors in file e:\quantis\bdump\qntsv4pr_cjq0_2440.trc:
    ORA-27300: OS system dependent operation:WaitForSingleObject failed with status: 0
    ORA-27301: OS failure message: The operation completed successfully.
    ORA-27302: failure occurred at: sssxcpttcs5
    Tue Dec 23 13:26:45 2008
    Process J000 died, see its trace file
    Tue Dec 23 13:26:45 2008
    kkjcre1p: unable to spawn jobq slave process
    Tue Dec 23 13:26:45 2008
    Errors in file e:\quantis\bdump\qntsv4pr_cjq0_2440.trc:
    Tue Dec 23 13:28:29 2008
    Warning: skgmdetach - Unable to register unmap, error 4210
    Tue Dec 23 13:28:41 2008
    Errors in file e:\quantis\bdump\qntsv4pr_psp0_2376.trc:
    ORA-27300: OS system dependent operation:WaitForSingleObject failed with status: 0
    ORA-27301: OS failure message: The operation completed successfully.
    ORA-27302: failure occurred at: sssxcpttcs5
    Dump file e:\quantis\bdump\alert_qntsv4pr.log
    Tue Dec 23 13:34:46 2008
    ORACLE V10.2.0.4.0 - Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Windows 2000 Version V5.0 Service Pack 4
    CPU : 4 - type 586
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:5721M/6141M, Ph+PgF:7752M/8001M, VA:1923M/2047M
    Tue Dec 23 13:34:46 2008
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 2
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =18
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.4.0.
    System parameters with non-default values:
    processes = 150
    __shared_pool_size = 914358272
    __large_pool_size = 8388608
    __java_pool_size = 16777216
    __streams_pool_size = 0
    sga_target = 1610612736
    control_files = E:\QUANTIS\CONTROL\CONTROL01.CTL, E:\QUANTIS\CONTROL\CONTROL02.CTL
    db_block_size = 8192
    __db_cache_size = 662700032
    compatible = 10.2.0.3.0
    log_archive_dest = E:\ARCHIVES
    log_archive_format = QNTS_%t_%s_%r.log
    db_file_multiblock_read_count= 16
    undo_management = AUTO
    undo_tablespace = UNDOTBS1
    remote_login_passwordfile= EXCLUSIVE
    db_domain =
    dispatchers = (PROTOCOL=TCP) (SERVICE=QNTSV4PRXDB)
    job_queue_processes = 10
    cursor_sharing = FORCE
    audit_file_dest = E:\QUANTIS\ADUMP
    background_dump_dest = E:\QUANTIS\BDUMP
    user_dump_dest = E:\QUANTIS\UDUMP
    core_dump_dest = E:\QUANTIS\CDUMP
    db_name = QNTSV4PR
    db_unique_name = QNTSV4PROD
    open_cursors = 300
    pga_aggregate_target = 78643200
    MMAN started with pid=4, OS id=2020
    PMON started with pid=2, OS id=2012
    DBW0 started with pid=5, OS id=584
    LGWR started with pid=6, OS id=2028
    CKPT started with pid=7, OS id=2032
    SMON started with pid=8, OS id=2036
    RECO started with pid=9, OS id=2040
    CJQ0 started with pid=10, OS id=2044
    MMON started with pid=11, OS id=2048
    Tue Dec 23 13:34:50 2008
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    MMNL started with pid=12, OS id=2052
    Tue Dec 23 13:34:50 2008
    starting up 1 shared server(s) ...
    PSP0 started with pid=3, OS id=2016
    Tue Dec 23 13:35:04 2008
    alter database mount exclusive
    Tue Dec 23 13:35:09 2008
    Setting recovery target incarnation to 1
    Tue Dec 23 13:35:09 2008
    Successful mount of redo thread 1, with mount id 233634568
    Tue Dec 23 13:35:09 2008
    Database mounted in Exclusive Mode
    Completed: alter database mount exclusive
    Tue Dec 23 13:35:09 2008
    alter database open
    Tue Dec 23 13:35:10 2008
    Beginning crash recovery of 1 threads
    parallel recovery started with 3 processes
    Tue Dec 23 13:35:10 2008
    Started redo scan
    Tue Dec 23 13:35:10 2008
    Completed redo scan
    251 redo blocks read, 36 data blocks need recovery
    Tue Dec 23 13:35:11 2008
    Started redo application at
    Thread 1: logseq 1285, block 86994
    Tue Dec 23 13:35:11 2008
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 1285 Reading mem 0
    Mem# 0: E:\QUANTIS\DATA\REDO01.LOG
    Tue Dec 23 13:35:11 2008
    Completed redo application
    Tue Dec 23 13:35:11 2008
    Completed crash recovery at
    Thread 1: logseq 1285, block 87245, scn 11803216
    36 data blocks read, 36 data blocks written, 251 redo blocks read
    Tue Dec 23 13:35:13 2008
    LGWR: STARTING ARCH PROCESSES
    ARC0 started with pid=19, OS id=3520
    ARC1 started with pid=20, OS id=3524
    Tue Dec 23 13:35:13 2008
    ARC0: Archival started
    ARC1: Archival started
    LGWR: STARTING ARCH PROCESSES COMPLETE
    Tue Dec 23 13:35:14 2008
    Thread 1 advanced to log sequence 1286 (thread open)
    Thread 1 opened at log sequence 1286
    Current log# 2 seq# 1286 mem# 0: E:\QUANTIS\DATA\REDO02.LOG
    Successful open of redo thread 1
    Tue Dec 23 13:35:14 2008
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Tue Dec 23 13:35:14 2008
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    Tue Dec 23 13:35:14 2008
    ARC1: Becoming the heartbeat ARCH
    Tue Dec 23 13:35:14 2008
    SMON: enabling cache recovery
    Tue Dec 23 13:35:17 2008
    Successfully onlined Undo Tablespace 1.
    Tue Dec 23 13:35:17 2008
    SMON: enabling tx recovery
    Tue Dec 23 13:35:17 2008
    Database Characterset is WE8MSWIN1252
    Opening with internal Resource Manager plan
    where NUMA PG = 1, CPUs = 4
    replication_dependency_tracking turned off (no async multimaster replication found)
    Starting background process QMNC
    QMNC started with pid=21, OS id=2456
    Tue Dec 23 13:35:22 2008
    Completed: alter database open
    Tue Dec 23 13:45:07 2008
    Thread 1 advanced to log sequence 1287 (LGWR switch)
    Current log# 3 seq# 1287 mem# 0: E:\QUANTIS\DATA\REDO03.LOG
    Tue Dec 23 13:49:21 2008
    Thread 1 advanced to log sequence 1288 (LGWR switch)
    Current log# 1 seq# 1288 mem# 0: E:\QUANTIS\DATA\REDO01.LOG
    Tue Dec 23 13:59:11 2008
    Thread 1 advanced to log sequence 1289 (LGWR switch)
    Current log# 2 seq# 1289 mem# 0: E:\QUANTIS\DATA\REDO02.LOG
Please help, as this is a production environment.

    Hi,
"but sir by setting the /3gb parameter, kernel memory will be reduced by 1 GB"
You only need 10-20% of total RAM for the kernel! That's exactly what 4GT is for. How much RAM is on this server?
    A VERY COMMON issue with Oracle 32-bit Windows is that the server has 8 gig RAM, but the SGA only uses 2 gig, leaving huge wasted RAM areas.
    See these important notes for sizing your SGA and PGA:
    http://www.dba-oracle.com/art_dbazine_ram.htm
Also note that you do not have to use 4GT; you can use AWE instead:
    http://www.dba-oracle.com/oracle_tips_ram_waste.htm
"does it mean the windows 32 bit platform is unfit for oracle?"
All 32-bit systems are constrained by addressable memory (2**32 = 4 GB, of which a 32-bit Windows process gets only 2 GB of user address space by default).
To use larger RAM regions, you must employ special tricks (in Windows, 4GT or AWE) to move the data buffers "above the line".
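To make the 4GT/AWE options concrete, here is an illustrative 32-bit Windows configuration fragment. The values are examples only, not recommendations for this particular server, and the boot entry shown is a generic placeholder:

```
; boot.ini (4GT): /3GB raises the user address space from 2 GB to 3 GB,
; /PAE lets AWE address physical RAM above 4 GB
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server" /fastdetect /3GB /PAE

; init.ora (AWE): buffers placed above the line must use the older
; block-count parameter; db_cache_size cannot be combined with AWE
use_indirect_data_buffers = true
db_block_buffers = 262144    ; example: 262144 x 8 KB blocks = 2 GB of buffer cache
```

The Oracle service account also needs the "Lock Pages in Memory" Windows privilege for AWE to work.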
    Hope this helps . . .
    Donald K. Burleson
    Oracle Press author
    Author of "Oracle Tuning: The Definitive Reference"
    http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

  • Flex App with remoting works on local Apache server - fails on production server

    Hi Everyone,
    I have a Flex app that uses Data Services. The application works correctly on my local Mac Server and Apache. When uploaded to my production CentOS server, the Data Services fail. When the app is done loading, the following error message comes up:
    Class "ModelsService" does not exist: Plugin by name 'ModelsService' was not found in the registry; used paths:
    : /www/html/mdubb//PHP2/bin-debug/services/
    #0 /var/www/html/mdubb/ZendFramework/library/Zend/Amf/Server.php(550): Zend_Amf_Server->_dispatch('getAllModels', Array, 'ModelsService')
    #1 /var/www/html/mdubb/ZendFramework/library/Zend/Amf/Server.php(626): Zend_Amf_Server->_handle(Object(Zend_Amf_Request_Http))
    #2 /var/www/html/mdubb/PHP2/bin-debug/gateway.php(73): Zend_Amf_Server->handle()
    #3 {main}
    Where getAllModels is a method of my custom ModelsService.
    I changed the Zend path in the amf_config.ini file so it references the correct directory. If I browse to gateway.php, it prompts to download the file, which I think is correct.
    I added in the config file the path to the services folder.
    I tried adding $server->addClass("ModelServices") in gateway.php, but it didn't like that.
The file structure on the production server is the same as on the local server (I literally uploaded everything in my local web root), so I can't think of what would be different between the two.
I have already pulled one all-nighter trying to get this to run. Do you know what I should troubleshoot next?
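For reference, here is roughly what my gateway.php does (simplified, with paths shortened; the directory name comes from my local setup):

```php
// gateway.php (simplified)
require_once 'Zend/Amf/Server.php';

$server = new Zend_Amf_Server();

// The services folder is scanned for classes such as ModelsService
$server->addDirectory(dirname(__FILE__) . '/services/');

// Explicit registration I also tried as a workaround:
// $server->setClass('ModelsService');

echo $server->handle();
```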
    Thanks in advance,
    Ryan

    Hi,
With reference to Lumira 1.15, the minimum SP we support is BI 4.0 SP6, so please upgrade to at least that level. Everything is detailed in the PAM: https://websmp107.sap-ag.de/~sapidb/011000358700001095842012E
    Best regards,
    Antoine
