Design issue: how to replace FTP

Hi,
I was given a task to replace an existing application that uses FTPClient
to send a file from one server to another. The main reason for the replacement
is that we want to close the port currently used by FTP; in
the future only port 80 will be open. That means I need to think about how
to use the HTTP protocol for sending the data.
The current application does not only FTP a file; it also creates directories and
subdirectories on the destination server. How can I achieve that using the HTTP
protocol? Is using HTTP a good idea, or is there a better one?
Thanks for your suggestions.

To do that over HTTP you would have to have some kind of code running on your server that supported it. Bare HTTP doesn't support creating directories, so you would have to write that code yourself. In the Java world you would write an upload servlet.
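As a sketch of what that could look like (the endpoint path, servlet mapping, and helper names here are hypothetical, not an existing API): the client encodes the target directory path into the request URL and sends the file with an HTTP PUT; the servlet on the other side splits the path and creates the directories before writing the file.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class HttpUploadClient {

    // Encode each remote path segment separately so "dir/subdir/file.txt"
    // survives spaces and special characters; the (hypothetical) servlet
    // splits the path and calls mkdirs() before writing the file.
    static String buildUploadUrl(String base, String... segments) {
        StringBuilder sb = new StringBuilder(base);
        for (String s : segments) {
            sb.append('/').append(URLEncoder.encode(s, StandardCharsets.UTF_8));
        }
        return sb.toString();
    }

    // PUT the file body to the upload servlet.
    static void upload(Path file, String url) throws IOException {
        HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
        con.setRequestMethod("PUT");
        con.setDoOutput(true);
        try (OutputStream out = con.getOutputStream()) {
            Files.copy(file, out);
        }
        if (con.getResponseCode() != HttpURLConnection.HTTP_OK) {
            throw new IOException("Upload failed: " + con.getResponseCode());
        }
    }
}
```

On the server side, a servlet mapped to something like /upload/* would read getPathInfo(), call mkdirs() on the parent directory, and stream the request body to the target file.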

Similar Messages

  • Design Issue: How to implement a DAO for a limited number of tables.

    Hi Friends,
    I am currently working on one small app.
    I need to design the app; the challenge is that the app will interact with only 5 DB tables.
    In this scenario I can't use Hibernate, which would be an overhead.
    Please suggest an approach: what should I use to interact with the DB, and how should the DAO be implemented?
    Here is the application domain:
    One Java main program will read some information through a DAO layer from an Oracle DB (consisting of 5 tables max), and based on the rows read I want to generate some alerts.
    I am planning to use Spring as a container to load the DAO and other beans directly from the main Java program via an XML application context.
    I am confused: should I use a Spring DAO implementation, direct JDBC calls from the DAO layer, or something else?
    Hope you understand my qn.
    Thanks
    Novin

    Novin-Jaiswal wrote:
    > Hi Friends,
    > I am currently working on one small app.
    > I need to design the app; the challenge is that the app will interact with only 5 DB tables.
    Since you don't talk about objects, I'll assume you don't have any. In that case, I won't recommend Hibernate.
    Just write table gateway classes using JDBC to perform CRUD operations for each table.
    > In this scenario I can't use Hibernate, which would be an overhead.
    I don't see that. Hibernate uses JDBC. If it generates the same SQL you would write, it's not an overhead.
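For a handful of tables, the table gateway pattern suggested above is just one small JDBC class per table. A minimal sketch (the alert table and its columns are invented for illustration):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Optional;

// One gateway class per table: all SQL for the hypothetical "alert" table
// lives here, and callers never touch JDBC directly.
public class AlertGateway {
    static final String INSERT_SQL = "INSERT INTO alert (id, message) VALUES (?, ?)";
    static final String SELECT_SQL = "SELECT message FROM alert WHERE id = ?";
    static final String UPDATE_SQL = "UPDATE alert SET message = ? WHERE id = ?";
    static final String DELETE_SQL = "DELETE FROM alert WHERE id = ?";

    private final Connection conn;

    public AlertGateway(Connection conn) { this.conn = conn; }

    public void insert(long id, String message) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(INSERT_SQL)) {
            ps.setLong(1, id);
            ps.setString(2, message);
            ps.executeUpdate();
        }
    }

    public Optional<String> findMessage(long id) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(SELECT_SQL)) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? Optional.of(rs.getString(1)) : Optional.empty();
            }
        }
    }
}
```

Spring can still supply the Connection (or a DataSource) to the gateway as a bean, so you keep dependency injection without pulling in an ORM.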

  • SOA Design issues and other politics

    Hi all,
    I have a requirement for a live data feed from an external system. I am using SOA 11g and JDeveloper 11g. There are two designs to achieve this: one proposed, and another I have in mind.
    1) The external system pushes XML data to an exposed SOA web service (one-way messaging mode) at my end. I then store the message in the database.
    a) In this design, how do we verify that all messages sent were actually received? Is there a better solution?
    2) The third party proposes a web service at their end. The application is real-time (i.e. any changes at their DB end, in certain DB tables, should be propagated to our side via XML messages). I would have to keep sending XML requests on a regular basis (say every 5 seconds). Can I build such a web service client using SOA 11g?
    a) Here I have a design issue: the data feed is live, so why does the WS client have to keep polling at regular intervals? Why can't the third party push data whenever there is an update/insert at their database end? The third party cites advantages like loose coupling and making the web service more generic. I doubt those claims, given that the applications are B2B and we are the only ones who will be using their web services for the time being. There may be two other organizations later on.
    b) If the first request has not yet returned, will the second request after 5 seconds be blocked?
    These designs and solutions are becoming quite political across the organizations, and it has to do with who will take the blame for data issues. I just want a proper SOA design for a live data feed. Please suggest the advantages and disadvantages of both if anybody has been down this path.
    Thanks
    Edited by: user5108636 on 1/09/2010 18:19

    See if wireless isolation is enabled.
    When logged into your WRT1900AC using local access replace the end of the browser URL with:
    /dynamic/advanced-wireless.html
    Linksys
    Communities Technical Support

  • Design issue with the multiprovider

    Design issue with the multiprovider:
    I have the following problem when using my multiprovider.
    The data flow is like this: I have the InfoObjects IObjectA, IObjectB and IObjectC in my cube (the source for this data is source system A).
    From another source system I am also loading the master data for IObjectA.
    Now I have created the multiprovider based on the cube and IObjectA.
    However, surprisingly, the join is not working correctly in the multiprovider.
    Scenario :
    Record from the Cube.
    IObjectA= 1AAA
    IObjectB = 2BBB
    IObjectC = 3CCC
    Record from IObjectA = 1AAA.
    I expect the result to be:
    IObjectA : IObjectB : IObjectC
    1AAA     : 2BBB     : 3CCC
    However, I am getting:
    IObjectA : IObjectB : IObjectC
    1AAA     : 2BBB     : 3CCC
    1AAA     : #        : #
    In the Identification section I have selected both entries for IObjectA, but I still get this result.
    My BW Version is 3.0B and the SP is 31.
    Thanks in advance for your suggestion.

    Maybe I was not clear enough in my first explanation; let me try to explain my scenario again.
    My Expectation from Multi Provider is :
    IObjectA
    1AAA
    (From InfoObject)
    Union
    IObjectA     IObjectB     IObjectC
    1AAA     2BBB     3CCC
    (From Cube)
    The record in the multiprovider should be :
    IObjectA     IObjectB     IObjectC
    1AAA     2BBB     3CCC
    Because this is what a union gives... and the definition of the multiprovider also says the same thing:
    http://help.sap.com/saphelp_bw30b/helpdata/EN/ad/6b023b6069d22ee10000000a11402f/frameset.htm
    Do you still think this is the intended behaviour of the multiprovider? If that is the case, what would be the purpose of having an InfoObject in the multiprovider?
    Thank you very much in advance for your responses.
    Best Regards.,
    Praveen.
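What the multiprovider does here is a union, not a join, and that is exactly why the second row with '#' appears: the InfoObject contributes its own row, with '#' (unassigned) in the columns it does not carry. A toy sketch of the difference (plain Java, nothing SAP-specific):

```java
import java.util.ArrayList;
import java.util.List;

public class MultiProviderUnion {
    // One result row: IObjectA, IObjectB, IObjectC; "#" = unassigned.
    record Row(String a, String b, String c) {}

    // A MultiProvider result is the UNION of the rows each provider
    // delivers. It stacks rows; it does NOT merge them on a shared key,
    // which is what a join keyed on IObjectA would do.
    static List<Row> union(List<Row> cube, List<Row> infoObject) {
        List<Row> result = new ArrayList<>(cube);
        result.addAll(infoObject);
        return result;
    }
}
```

The union of the cube row (1AAA, 2BBB, 3CCC) and the InfoObject row (1AAA, #, #) is two rows, which is precisely the output the poster observes; collapsing them into one row would require a join, and that is not what a MultiProvider performs.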

  • Data mart from two DSOs to one - Losing values - Design issue

    Dear BW experts,
    I'm dealing with a design issue for which I would really appreciate any help and suggestions.
    I will be as brief as possible, and will explain further based on any doubts or questions I receive, to make it easier to go through this problem.
    I have two standard DSOs (DSO #1 and #2) feeding a third DSO (DSO #3), also standard.
    Each transformation DOES NOT include all fields, but only some of them.
    One of the source DSOs (let's call it DSO #1) is loaded from a DataSource that allows reversal records (record mode = 'R'). Therefore some updates to DSO #1 arrive as one entry with record mode 'R' and a second entry with record mode 'N' (new).
    Both feeds are delta mode, and they do not update the same entries; the updated entries can differ (meaning a specific entry (unique key values) could be updated by one of the feeds with no updates from the second feed for that entry).
    The issue we have: when 'R' and 'N' entries occur in DSO #1 for an entry, that entry is also reversed and recreated in the target DSO #3 (even though not ALL fields are mapped in the transformation), and therefore we lose ALL the values that are updated exclusively through DSO #2; they become blank.
    I don't know if we are missing something in our design, or how we should fix this issue.
    Hope I was more or less clear with the description.
    I'd really appreciate your feedback.
    Thanks!!
    Gustavo

    Hi Gustavo
    Two things I need to know.
    1. Do you have any End Routine in your DSO? If yes, what is the setting under "Update behavior of End Routine Display"? (The option is available to the right of the Delete button after the End Routine.)
    2. Did you try with Full Load from DSO1 and DSO2 to DSO3? Do you face the same problem?
    Regards
    Anindya

  • Design issue with sharing LV2 style global between run-time executables

    Hi,
    Just when I thought that I had everything figured out, I ran into this design issue.
    The application that I wrote is pretty much a client-server application where the server publishes data and the client subscribes to data using DataSockets. Once the client gets all the data in the mainClient.vi program, I use an LV2-style global (using shift registers) to make the data available to all the other sub-VIs. So the LV2 is in initialize mode in the mainClient.vi program, and in the sub-VIs the LV2 is in read mode. Also, I had built the run-time menu for each sub-VI so that when an item is selected from the menu, I use Get Menu Selection to get the item tag, which is the file name of the sub-VI, and open the selected sub-VI using VI Server. This all worked great on my workstation, where I have LabVIEW 7.0 Express installed. But the final goal is to make EXEs for each of these sub-VIs and install the run-time engine on PCs that do not have LabVIEW installed. Of course, when I did that, only the mainClient.exe program was getting the updated data from the server; the sub-VIs were not getting the data from mainClient.exe. I realized that the reason for this is that I had compiled all the sub-VIs separately, so the LV2 VI is now local to each executable (i.e. all executables have their own memory location). Also, the run-time menu did not work, because now I am trying to open an executable using VI Server properties.
    To summarize: is there a way to share LV2-style globals between executables without compiling all of the sub-VIs at one time? I tried using DataSockets (localhost) instead of LV2-style globals to communicate between the sub-VIs, but I ran into performance issues due to the large volume of data.
    I would really appreciate it if anyone can suggest a solution/alternative to this problem.
    Thanks
    Nish

    > 1)   How would I create a wrap-around for the LV2.vi which is
    > initialized in my mainClient.vi and then how would I use vi server in
    > my sub-vi to refer to that LV2.vi?
    > You mentioned that each sub-vi when opened will first connect to the
    > LV2.vi via VI Server and will keep the connection in the shift
    > register of that sub-vi. Does this mean that the sub-vi is accessing
    > (pass-by-reference) the shared memory of the mainClient.vi? If this
    > is what you meant I think that this might work for my application.
    >
    If the LV2 global is loaded statically into your mainClient.vi, then any
    other application can connect to the exe and get a reference to the VI
    using the VI name. This gives you a VI reference you can use to call
    the VI. Yes, the values will be copied between applications. That is
    why you need to add access operations to the global that return just
    the info needed. If you need the average, do that in the global. If
    you need the array size, do that in the global. Returning the entire
    array shouldn't be a common operation on the LV2-style global anyway.
    > 2) Just to elaborate on my application, the data is
    > transferred via DataSockets from the mainServer.vi on another PC to
    > the client’s PC where the mainClient.vi program subscribes the
    > data (i.e. 5 arrays of double type and each arrays has about 50,000
    > elements). The sub-vi’s will have to access these arrays
    > located on the mainClient.vi every scan. Is there any limitation on
    > referencing the mainClient.vi data via vi-server from each sub-vi?
    Your app does need to watch both the amount of data being passed across
    the network and the amount being shared between the apps. You might
    want to consider putting the VIs back into the main app. What is the
    reason you are breaking them apart?
    Greg McKaskle
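Greg's point about access operations can be illustrated outside LabVIEW. The Java sketch below (illustration only, not LabVIEW code) mirrors an LV2-style functional global: one shared store, with operations like average and size computed inside it, so callers never pull the whole array across the boundary.

```java
// Java analog of an LV2-style functional global: the data lives in one
// place and is reached only through operations, so a caller asking for
// the average or the size never copies the full 50,000-element array.
public class SharedSamples {
    private static double[] samples = new double[0];

    public static synchronized void store(double[] s) {
        samples = s.clone();   // "initialize/write" operation
    }

    public static synchronized int size() {
        return samples.length; // derived info, no array copy
    }

    public static synchronized double average() {
        if (samples.length == 0) return 0.0;
        double sum = 0.0;
        for (double v : samples) sum += v;
        return sum / samples.length;
    }
}
```

The design choice is the same one Greg describes: put the computation next to the data, and expose narrow operations instead of the raw buffer.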

  • How to replace null with 0 in an OBIEE 11g pivot table view?

    Hi,
    I am using an OBIEE 11g pivot table view.
    I have tried so many approaches but none of them work. My Oracle support team also tried but could not solve it.
    1.) used the BIN method on the measure columns
    2.) ifnull(column name, 0)
    3.) case condition:
    case when column is NULL then 0 else column end
    4.) the data format override custom format option
    It seems that the syntax for this custom format is positive-value-mask;negative-value-mask;null-mask. So this means we have a few options.
    E.g. if you want zeros (0) instead of null then enter:
    #,##0;-#,##0;0
    http://obieeelegant.blogspot.com/2011/06/how-to-replace-null-values-in-obiee.html
    Note:
    I don't want to show a strike-rate or a custom message in the blank cell. I want to show the blank cells (null values) as 0 in the OBIEE 11g pivot table view.
    OBIEE 10g works fine; the issue only appears in OBIEE 11g.
    Thanks in advance...
    Thanks
    Deva

    I tried this on 11.1.1.6.2 but I don't get good results.
    I chose subject area A-Sample Sales and I chose:
    T02 Per Name Month on rows
    P1 Product on columns
    Measure: IFNULL("Simple Calculations"."17 Net Costs", 0)
    Prompted Year 2008
    For product Install and 2008/01 there isn't a value... and I put IFNULL in my formula...
    My problem is that I want to apply conditional formatting when Product = Install, so the whole column gets a background colour except the null values... which is not pretty...
    If I add a custom numeric format I can see 0, but not with the background colour...
    I add a picture:
    http://imageshack.us/photo/my-images/600/tablegk.jpg/
    Any help with this??
    Is it a bug in OBIEE??
    Edited by: Alex1 on 04-sep-2012 4:28

  • I have Windows 7 Professional 64-bit on my computer and have recently installed iTunes 11.1.3.8, but I started getting this message: "The folder iTunes is on a locked disk or you do not have write permission for this folder". How can I solve it?


    Right-click on your main iTunes folder and click Properties, then go to the Security tab and click Advanced. If necessary, grant your account and SYSTEM full control of this folder, subfolders and files, then tick the option to replace permissions on child objects, which will repair permissions throughout the library. (The dialog described is from XP, but Windows 7 shouldn't be too different.)
    If it won't let you change the permissions, use the Owner tab to take ownership using an account with administrator privileges.
    tt2

  • How to replace # or Not assigned with blank in BEx Query Output.

    Hi,
    While running the query through BEx Query Designer or Analyzer, I get # or "Not assigned" where there are no values.
    The requirement is to replace # or "Not assigned" with a blank in the output.
    I want to know whether there is any setting in BEx Query Designer where we can do this, and how.
    Please share your inputs on this. Any inputs would be appreciated.
    Thanks,
    Naveen

    Check out SDN-thread: "Re: Remove 'Not assigned'" for more details
    Ideas from SDN research:
    "a solution I have used is to put each RKF column in a CKF column, then in each CKF use RKF + 0; the outcome is that your # should now be 0s. In the query properties you can set the option to display 0s as blank."
    "try to enter a text for the blank entry in the master data maintenance of the relevant objects and set the display option for the objects to 'text'."
    Threads:
    SDN: How to replace # or Not assigned with blank in BEx Query Output.
    SDN: Re: Remove 'Not assigned
    SDN: How to replace # or (Not assigned) with blank in BEx Query Output.
    SDN: Bex Analyzer : Text element system's table ?  
    SDN: change message in web application designer ["nonavailable" ->  136 of SAPLRRSV]
    SDN: Not Assigned ["Not Assigned -> 027 of SAPLRRSV]
    SDN: replacing '#'-sign for 'not assigned' in queries
    SDN: # in report when null in the cube
    SDN: How to replace '#' with blank when there is no value for a date field
    Edited by: Thomas Köpp on Sep 13, 2010 5:20 PM

  • Qosmio G50-10J: How to replace the Optical Disc Drive (ODD)?

    Hello there,
    the optical drive of my Qosmio G50 is not working properly anymore; it reads only some movie DVDs, but sadly no video game DVDs at all, so I cannot install any games.
    The warranty has already expired too. So I wanted to ask whether it is possible to change the drive myself, and if so, how should I do it, and would the laptop be at
    serious risk of getting damaged?
    I would be very thankful for every tip and every help you guys can give me!!!
    Best regards,
    a.z.

    Hi
    Usually the ODD replacement is not very tricky.
    If you want, you can check this forum category.
    Here you can find some videos on how to replace the ODD on different notebook models:
    http://forums.computers.toshiba-europe.com/forums/forum.jspa?forumID=115
    Here is also a nice YouTube channel which provides instructions on how to replace the ODD:
    http://www.youtube.com/user/toshibaeuropesupport
    But maybe the ODD laser lens is just dirty. In my case I solved a similar issue by cleaning the laser lens with a cotton-wool tip and alcohol.
    But be careful doing this!

  • How to replace a disk within a parity space

    I have been searching the net for better documentation on this, but so far not a lot of information on what happens when I want to do the following (this is for home use - I am not a storage admin):
    I had a JBOD array with 8 disks in it.  3x3TB and 5x2TB.  I had this array thin provisioned to 45 TiB.  When I hit about 13 TiB of storage, the drive stopped accepting new data and it was time to expand.  This made sense to me as I had
    about 19 TB or 17.3 TiB of room, minus parity, I'm about there.  Perhaps I could have done more to maximize usage, but that point is moot now.  What I did was buy and add a new 4TB disk to the array.
    I had a Storage Pool containing all my disks, and I added this new 4TB disk to the pool.  I had a Virtual Disk sitting on that pool in Single Parity mode.  After adding the drive, I was unable to get the existing virtual disk to use that drive. 
    I imagine this was due to the number of columns previously allocated to the array.  I had some ideas on how to offload the data into a new Virtual Disk with more columns, but nothing really worked, so I bought 3 more 4TB disks, reduced my data footprint
    to 12TB, copied everything off, and destroyed the array.
    Next I re-built my media server and put the original 8 disks in place, along with the fourth new 4TB disk, as well as a 200 GB disk, a 600 GB disk, and a 1.5 TB disk that I had lying around.  I'm in day 2 of my copy operation from BackupDrive1
    to the new 12 disk array.  When that completes, I want to use that disk to replace the 200 GB disk I was using as a placeholder.
    Everything I read on the internet has been inconclusive to me thus far.  Microsoft's own documentation claims that replacing a disk is as simple as removing the old disk and adding the new one, but I don't think it is.  When I tested this
    using the 4TB disk and the other 3 oddballs with a smaller subset of data, it definitely wasn't as easy as that.
    My intuition tells me that what I am supposed to do is power down the server and replace the disk I intend to replace, then add that disk to the now degraded storage pool.  At that point I should remove the old drive from the pool and repair the virtual
    disk.  However, I am nervous that "removing" a drive will decrease the number of columns, and that in the end I won't be able to use the space from the 4TB drive.  I've seen questions on the net on how to replace a disk and none of them
    seem to specifically apply to parity spaces, most are talking about either simple spaces (not happening) and mirror spaces.  I understand the difference but Microsoft's implementation of this is not as straightforward as other products I have used. 
    In other cases it was as simple as this: Physically replace disk, receive prompt that your old disk is missing, but you have a new disk, and would you like to rebuild the data from the old disk on the new disk?  Yes?  Then wait patiently while
    we painfully write 4TB of 1's and 0's and you're good to go.  I want this button in Storage Spaces please.

    Shaon,
    Thanks for your reply.  However, this does not sufficiently answer my question.
    The 200GB, 600 GB and 1.5 TB drives are temporary drives, since from my experience, adding drives to an array with 8 or more drives does not work.  I want to replace these drives with 4TB drives, leaving the smallest drive at 2TB. 
    Also, I was under the assumption that parity (which I'm imagining works in storage spaces somewhat like RAID 50), claims a space for parity equal to the size of the largest drive in the bunch, not the smallest, leaving me with a total space equal to the sum
    of the drives minus 4TB.
    I have experimented with the suggested workflow, but through the GUI this does not work.  I add the new drive, remove the missing one, and get a popup explaining that my virtual disk will be "In Repair".  However, this popup vanishes before I can
    click OK (almost certainly a Microsoft bug).  If I click OK before I finish reading it, nothing happens; the drive doesn't repair.  I need to retire the disk through PowerShell, and then use PowerShell to repair the virtual disk.  This
    works OK for a while, but once I start using it, the new drive invariably gives me a "Stale Metadata" error.  I have tried this with 3 different 4TB drives, so I doubt this is actually a hardware issue with the drives.  More than
    likely, it is a false positive caused by an issue with Storage Spaces.  It does give me pause about continuing to use the disk, so I end up swapping the old 200GB disk back in every time, and the space works fine at that point.  Right now I am attempting
    to replace the 1.5 TB disk instead, to see if that does anything.  After that, I plan on running the Optimize-Volume -DriveLetter D -Analyze -Verbose cmdlet to attempt to rebalance the data on the disk.  Should I expect that command to do anything
    useful?
    Just a side comment: this product has been around for a solid 2 years now.  I've found the documentation to be almost non-existent, with the exception of one TechNet page that every question seems to link to in its answer.  This page helps
    almost no-one, as it lacks examples that apply to the questions being asked.  It gives little information on parity spaces, and does not unequivocally answer the one question that many parity space users have: "What is the correct workflow for growing
    my pool?"  I understand that in a mirrored situation you can't grow the pool without adding a certain number of disks, but if parity functions anything like RAID 50, this should not be a limitation.  It amazes me that this is the only document Microsoft
    seems to have regarding Storage Spaces, and that this single document has seemingly never been updated.  Take this feedback however you will; I truly hope that more discussion will reach the right people and help Microsoft fix the issues with a product that
    actually has a lot of promise.

  • Design Issue: Localization using Lookup OR Dependency Injection

    Hello Forums!
    I'm having a design issue regarding localization in my application. I'm using Spring Framework (www.springframework.org) as an
    application container, which provides DI (dependency injection) - but the issue is not Spring- but rather design related. All localization
    logic is encapsulated in a separate class ("I18nManager"), which basically is just a wrapper around multiple Java ResourceBundles.
    Right now localization is performed in the "traditional" look-up style, e.g.
    ApplicationContext.getMessage("some.message.key");
    where ApplicationContext is a wrapper around the Spring application context and getMessage(...) is a static method on that
    context. The advantage of that solution is a clean & simple interface design, localization merely becomes a feature of classes, but
    is not part of their public API. The only problem with that approach is the very tight coupling of Classes to the ApplicationContext, which
    really is a problem when you want to use code outside of an application context. The importance of this problem increases if one considers
    that I18N is a concern that can be found in every application layer, from the GUI to the business to the data tier; all those components suddenly depend
    on an application context being present.
    My proposed solution to this problem is a "Localizable" interface, which may provide mutators for an "I18NManager" instance that can be
    passed in. But is this really a well-designed solution, as almost any object in an application may be required to implement this interface?
    I'm also concerned about performance: the look-up solution does not need to pass references to localizable objects, whereas my proposed solution
    will require one I18NManager reference per localizable object, which might cause trouble if you, let's say, load 10,000 POJOs from some database that
    are all localizable.
    So (finally) my question: how do you handle such design issues? Are there any other solutions out there that I'm not aware of yet? Comments/Help welcome!

    michael_schmid wrote:
    Hello Forums!
    I'm having a design issue regarding localization in my application. I'm using Spring Framework (www.springframework.org) as an
    application container, which provides DI (dependency injection) - but the issue is not Spring- but rather design-related. All localization
    logic is encapsulated in a separate class ("I18nManager"), which basically is just a wrapper around multiple Java ResourceBundles.
    Why do you think you need a wrapper around resource bundles? Spring handles I18N very well, as does plain Java. What improvement do you think you bring?
    Right now localization is performed in the "traditional" look-up style, e.g.
    ApplicationContext.getMessage("some.message.key");
    where ApplicationContext is a wrapper around the Spring application context and getMessage(...) is a static method on that
    context.
    Now you're wrapping the Spring app context? Oh, brother. Sounds mad to me.
    The advantage of that solution is a clean & simple interface design, localization merely becomes a feature of classes, but
    is not part of their public API. The only problem with that approach is the very tight coupling of Classes to the ApplicationContext, which
    really is a problem when you want to use code outside of an application context. The importance of this problem increases if one considers
    that I18N is a concern that can be found in every application layer, from GUI to business to data tier; all those components suddenly depend
    on an application context being present.
    One man's "tight coupling" is another person's dependency.
    I agree that overly tight coupling can be a problem, but sometimes a dependency just can't be helped. They aren't all bad. The only class with no dependencies calls no one and is called by no one. We'd call that a big, fat main class. What good is that?
    Personally, I would discourage you from wrapping Spring too much. I doubt that you're improving your life. Better to use Spring straight, the way it was intended. I find that they're much better designers than I am.
    Personally, I would discourage you from wrapping Spring too much. I doubt that you're improving your life. Better to use Spring straight, the way it was intended. I find that they're much better designers than I am.
    My proposed solution to this problem is a "Localizable" interface, which may provide mutators for an "I18NManager" instance that can be
    passed in. But is this really a well-designed solution, as almost any object in an application may be required to implement this interface?
    I would say no.
    I'm also concerned about performance: the look-up solution does not need to pass references to localizable objects, whereas my proposed solution
    will require one I18NManager reference per localizable object, which might cause trouble if you, let's say, load 10,000 POJOs from some database that
    are all localizable.
    So (finally) my question: how do you handle such design issues? Are there any other solutions out there that I'm not aware of yet? Comments/help welcome!
    I would use the features built into Spring and Java until I ran into a problem. It seems to me that you're wrapping your way into a problem and making things more complex than they need to be.
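As the reply suggests, the lookup the wrapper re-implements is already in the JDK. A minimal sketch (the key and message here are invented; in a real application the bundle would come from a messages.properties file on the classpath, or from Spring's MessageSource):

```java
import java.util.ListResourceBundle;

// Bundle defined in code only so this example is self-contained; normally
// this would be a properties file loaded via
// ResourceBundle.getBundle("messages", locale).
public class Messages extends ListResourceBundle {
    @Override
    protected Object[][] getContents() {
        return new Object[][] { { "some.message.key", "Hello" } };
    }
}
```

With Spring, the equivalent is injecting a MessageSource bean into the classes that need localized text, rather than routing every lookup through a static wrapper around the application context.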

  • How to replace the certificate of a Cisco 2106 wireless LAN controller for CAPWAP?

    I am interested in the CAPWAP feature, and I downloaded the OpenCAPWAP project to build an Access Controller (AC) and a Wireless Termination Point (WTP). I built the AC on a PC and the WTP on an Atheros AP. The CAPWAP feature worked well when I enabled CAPWAP with my own AC and WTP. When I got the Cisco 2106 wireless LAN controller (Cisco WLC), I configured the Cisco WLC to replace my own AC, but I got an authorization failure on the Cisco WLC side. It seems the Cisco WLC could not recognize the CAPWAP messages sent from my own WTP. I think this issue just requires synchronizing the certificate between the Cisco WLC and the WTP, so I need to replace the Cisco WLC's certificate manually. Does anyone know how to replace the certificate manually on a Cisco WLC?
    Best Regards,
    Alan

    Unfortunately, this Support Community is for Cisco Small Business & Small Business Pro product offerings.  The WLC 2106 is a traditional Cisco product.  You can find support of this type on the Cisco NetPro Forum for all traditional Cisco products.
    Best Regards,
    Glenn

  • Dreamweaver CS4 Search & Replace / FTP connectivity / ctrl + B, ctrl + c

    Hi all,
    CS4 Search and Replace / FTP connectivity ISSUES:
    I use the search and replace tools for some large replace operations on live websites. However, when searching and replacing, Dreamweaver constantly has to re-establish a connection to the server for each file I search and replace in.
    Programs like PSPad have this down to a T. Any chance we can have an update to make search and replace faster? It's utterly useless at the moment.
    The biggest problem is that if the connection to the server fails while searching and replacing text in a document, the file gets removed from the server. I have now lost 3 website pages because of this. I edit files live because they are within a test environment; I don't like working locally, because I know programs like PSPad are capable of being extremely fast at editing live.
    Because of this issue, I have had to stop using Dreamweaver CS4 for now.
    Ctrl+B, Ctrl+C ISSUES
    When pressing Ctrl+B to add strong tags to text, sometimes it doesn't work and I have to go to the keyboard tag editor and close it again; then it works again. So there's a bug there.
    And sometimes Ctrl+C copies as if it were Ctrl+A then Ctrl+C... and it stops once again when I go to keyboard tags and close them again.
    Explain that hehe!
    All the best and thanks in advance.

    jimmyt1988 wrote:
     Any chance we can have an update to make search and replace faster? It's utterly useless at the moment.
    This is a user-to-user forum, so no one here can grant your request. File an enhancement request through the official form at http://www.adobe.com/cfusion/mmform/index.cfm?name=wishform.

  • How to replace all existing objects when impdp a schema ???

    I wrote 2 functions which do expdp and impdp.
    The impdp function is:
    create or replace function impdp_schema(fromusr in varchar2,
                                            tousr   in varchar2,
                                            dir     in varchar2,
                                            dmpfile in varchar2,
                                            logfile in varchar2 default null)
      return number as
      h1        NUMBER;
      job_name  varchar2(128);
      job_state varchar2(32767);
      ret       number;
    BEGIN
      if dir is null or dmpfile is null then
        return 1;
      end if;
      ret := 0;
      job_name := 'IMP' || to_char(sysdate, 'yyyymmddhh24miss');
      h1 := DBMS_DATAPUMP.OPEN('IMPORT',
                               'SCHEMA',
                               NULL,
                               job_name,
                               'COMPATIBLE',
                               DBMS_DATAPUMP.KU$_COMPRESS_METADATA);
      DBMS_DATAPUMP.ADD_FILE(h1,
                             dmpfile,
                             dir,
                             null,
                             DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
      if logfile is not null then
        DBMS_DATAPUMP.ADD_FILE(h1,
                               logfile,
                               dir,
                               null,
                               DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
      end if;
      DBMS_DATAPUMP.METADATA_REMAP(h1,
                                   'REMAP_SCHEMA',
                                   UPPER(fromusr),
                                   UPPER(tousr));
      DBMS_DATAPUMP.set_parameter(h1, 'TABLE_EXISTS_ACTION', 'REPLACE');
      DBMS_DATAPUMP.START_JOB(h1);
      dbms_datapump.wait_for_job(h1, job_state);
      dbms_datapump.detach(h1);
      dbms_output.put_line(job_state);
      return ret;
    exception
      when others then
        dbms_output.put_line(SQLERRM);
        dbms_datapump.detach(h1);
        return 1;
    END;
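    For reference, a call to this function might look like the following; the schema names, directory object, and file names here are hypothetical, and the caller is assumed to have the IMP_FULL_DATABASE role:

    ```sql
    -- Hypothetical invocation of the impdp_schema function above.
    -- Assumes a directory object DATA_PUMP_DIR already exists and the
    -- dump file wis00001.dmp has been placed in that directory.
    DECLARE
      rc NUMBER;
    BEGIN
      rc := impdp_schema(fromusr => 'WIS00001',
                         tousr   => 'WIS00002',
                         dir     => 'DATA_PUMP_DIR',
                         dmpfile => 'wis00001.dmp',
                         logfile => 'wis00001_imp.log');
      dbms_output.put_line('impdp_schema returned ' || rc);
    END;
    /
    ```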
    when I test this function, I found some errors :
    Master table "SYS"."IMP20100823152536" successfully loaded/unloaded
    Starting "SYS"."IMP20100823152536": 
    Processing object type SCHEMA_EXPORT/USER
    ORA-31684: Object type USER:"WIS00001" already exists
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
    ORA-31684: Object type SEQUENCE:"WIS00001"."SEQ_ID_WIS_LOG" already exists
    ORA-31684: Object type SEQUENCE:"WIS00001"."SEQ_TID_TABLENAME" already exists
    ORA-31684: Object type SEQUENCE:"WIS00001"."SEQ_WIS_MASS" already exists
    ORA-31684: Object type SEQUENCE:"WIS00001"."SEQ_WITSDELAY_1" already exists
    ORA-31684: Object type SEQUENCE:"WIS00001"."SEQ_WITSDELAY_2" already exists
    ORA-31684: Object type SEQUENCE:"WIS00001"."SEQ_WITSDELAY_3" already exists
    ORA-31684: Object type SEQUENCE:"WIS00001"."SEQ_WITSEDIT" already exists
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    . . imported "WIS00001"."WITS_98_PIC"                    28.28 KB      97 rows
    . . imported "WIS00001"."DRAWING_CONTROL"                26.94 KB     218 rows
    Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
    ORA-31684: Object type PROCEDURE:"WIS00001"."PROC_GETITEM" already exists
    Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
    Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
    **Job "SYS"."IMP20100823152536" completed with 9 error(s) at 15:30:12**
    How can I replace all objects when they already exist, including sequences, tables, and procedures/functions???
    Edited by: UNISTD on 2010-8-23 12:50 AM

    I love posts without a 4-digit version number. This may well be because of a bug, and as this post lacks a 4-digit version number, nobody will be able to answer it.
    You posted in vain.
    I recommend you search on Metalink whether there are issues with this parameter.
    Sybrand Bakker
    Senior Oracle DBA
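
    For what it's worth, the ORA-31684 messages in the log above are expected: Data Pump's TABLE_EXISTS_ACTION=REPLACE applies only to tables, so pre-existing sequences, procedures, and other non-table objects are simply skipped. One common workaround (a sketch, not a verified fix; the target schema name is hypothetical) is to drop the target schema before starting the import, so that every object gets recreated:

    ```sql
    -- Sketch: drop the target schema before the Data Pump import so
    -- REMAP_SCHEMA recreates all objects, not just tables.
    -- ORA-01918 means the user did not exist, which is fine to ignore;
    -- any other error is re-raised.
    BEGIN
      EXECUTE IMMEDIATE 'DROP USER wis00002 CASCADE';
    EXCEPTION
      WHEN OTHERS THEN
        IF SQLCODE != -1918 THEN
          RAISE;
        END IF;
    END;
    /
    ```

    If the dump was taken by a privileged user, the import will recreate the user from the SCHEMA_EXPORT/USER metadata; otherwise the target user must be created and granted privileges before the import runs.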
