Achieving Auto-Failover between Branches and HQ site using OSPF

Hi there,
I have a number of branches and ATMs which connect to the HQ via GRE tunnels through the L2 MPLS of the service providers' network.
Recently I commissioned a DR site that I would like all the branches and ATMs to point to in case of a disaster.
Most importantly, I am supposed to achieve an auto-failover solution between the branches/ATMs and HQ; each ATM and branch has dual links from different providers for resiliency.
The standard I am supposed to use is OSPF between branches and HQ, where we have GRE tunnels running in between. Is there anyone who can assist me on how to achieve an auto-failover solution between the branches and HQ using OSPF on the existing GRE tunnels?
A sample configuration would really help.
Thanks.

What you are asking for here is a full-blown network design. It is more than just a few configuration commands.
We can point you in the right direction, but we cannot do the entire thing for you.
We would need to know things like: is there a direct link between HQ and DR, how many branches are there, is OSPF already in use and, if so, what areas do you have, are you proposing to use the same IPs at the DR site, etc.
But before all that, have you thought about how the applications would work?
Presumably you have applications that run on servers at HQ. How do you sync this information to the DR site servers?
So a couple of scenarios -
1) The link at HQ fails and all sites automatically switch to DR. Then 10 minutes later the link comes back up, so all sites switch back to HQ.
How are you going to make sure that any data written to servers in DR is then replicated to the HQ servers in real time?
2) A branch's primary link fails. It switches to DR, but all the other branches are still going to HQ.
Again, how are you going to ensure the data remains consistent between the HQ and DR servers, as you now have two active sites?
Routing protocols are very good at automatically providing failover but they don't understand the applications.
The hard part with DR is not the network, although that in itself can be challenging, but how the applications are going to work.
So if you only want to invoke DR when there is a major outage at your HQ site, one which could last for days for example, then using a dynamic routing protocol could create more problems than it would solve.
You may not have applications that need to be kept in sync, so it may not be an issue for you.
But even then, what you are asking for is not trivial; DR never is.
Perhaps you can clarify exactly how it is meant to work; otherwise we cannot really point you in the right direction.
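That said, as a pure starting point, here is a minimal sketch of what dual GRE tunnels with OSPF failover might look like on a branch router. Every IP address, interface number and cost below is invented; you would need to adapt it to your own addressing, areas and policy:

! Branch router - illustrative sketch only
interface Tunnel1
 description Primary GRE tunnel via provider A
 ip address 172.16.1.2 255.255.255.252
 keepalive 10 3
 tunnel source GigabitEthernet0/0
 tunnel destination 198.51.100.1
 ip ospf cost 10
!
interface Tunnel2
 description Backup GRE tunnel via provider B
 ip address 172.16.2.2 255.255.255.252
 keepalive 10 3
 tunnel source GigabitEthernet0/1
 tunnel destination 203.0.113.1
 ip ospf cost 100
!
router ospf 1
 ! advertise the tunnel subnets and the branch LAN (10.1.1.0/24 here)
 network 172.16.1.0 0.0.0.3 area 0
 network 172.16.2.0 0.0.0.3 area 0
 network 10.1.1.0 0.0.0.255 area 0

The GRE keepalives matter because a GRE tunnel interface otherwise stays up/up even when the far end is unreachable, so OSPF would never drop the adjacency and fail over. With keepalives the tunnel line protocol goes down, the OSPF neighbour is lost, and traffic reconverges onto the higher-cost tunnel automatically. But none of that answers the application questions above, which are the real design problem.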
Jon

Similar Messages

  • Connect as sysdba between Linux and Windows without using password

    Hello,
    I need to connect as sysdba between Linux and Windows without using a password for the sys user:
    Sqlplus /@string_connection
    Please help me

    Duplicate post:
    Connect as sysdba between Windows and Linux
    Actually, you have been given the answer in your thread above. You need to read the Oracle documentation. Search for "password file" at tahiti.oracle.com.
    regards

  • SQL Database replication between Primary and DR site

    Hi,
    We are setting up a DR Site, for our Production SAP system.
    At present the current setup for the Production SAP system is as follows:
    SAP ERP 2005 (SAP ECC 6.), ABAP+JAVA on Windows with MS SQL Server 2005.
    Data are stored on SAN and SAP is installed in an MSCS cluster environment.
    For setting up the DR site, we are setting up a Central System installation, and replication will be configured between the SAP Production/Primary system and the DR site.
    We are shadowing this at our test setup and facing the replication issues below while setting up transactional replication in MS SQL Server 2005.
    1. At present 78,716 articles are available in the Production database, and SQL fails the replication publication after 59,000 articles.
    We opened a ticket with Microsoft and below are the suggestions provided by them.
    1. Microsoft suggests preparing 2-3 publications for the total number of tables, 1 publication for all views and 1 publication for SPs, functions and data types, with a single Distribution Agent. The Distribution Agent will take care of propagating changes to the tables across the publications.
    2. All related tables (parent-child relationships) are to be published in one publication.
    3. They recommend creating 4 publications: 2 for the tables, 1 for the views and the other for the stored procedures. This would ensure better manageability.
    The main concern here is about splitting the tables into two different sets, with about 30,000 in one of the publications and the rest in the second publication.
    You will have to make sure that all the dependent tables are included in the same publication. Since this database is a SAP database, we request information on the following:
    1. How to identify and split the entire set of tables into two/three groups.
    2. How to publish dynamic tables (the tables which get created after the publication process).
    Regards,
    JP

    Hi,
    for database mirroring and log shipping there are several very good whitepapers and other documents out there:
    [http://download.microsoft.com/download/d/9/4/d948f981-926e-40fa-a026-5bfcf076d9b9/SAP_SQL2005_Best%20Practices.doc]
    [http://blogs.msdn.com/saponsqlserver/archive/2007/09/26/what-did-we-learn-using-database-mirroring-over-the-last-two-years-in-our-sap-erp-system-second-revision.aspx]
    [http://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?EventID=1032297524&EventCategory=5&culture=en-US&CountryCode=US]
    [http://sqlcat.com/whitepapers/archive/2007/11/19/database-mirroring-best-practices-and-performance-considerations.aspx]
    Furthermore, see SAP Note 965908 on this.
    The needed bandwidth depends on the amount of log you create in a given timeframe. Internet bandwidth can be sufficient if you don't generate much log (BTW, I don't understand the term "RTD is 220").
    In the log shipping scenario you can run the DR site in standby mode; then read-only access (for DBCC etc.) is possible.
    Regards
      Clas Hortien

  • UCM - How to setup synchronize between DR and Primary site

    Hi all.
    As mentioned in the title, we have a primary UCM site and a clean DR site. I want to ensure that end users have the ability to work with the DR site for a short time when the primary site is unavailable. To make the DR site available to serve when the primary is down, we can do:
    - setup auto-export archive on primary site
    - target to destination archive on DR site
    - auto-transfer from primary to DR site
    - with data in Database, we can use Golden Gate to sync Primary and DR site
    So, with these settings, I can ensure that the DR is ready to run when the primary is down. But if the primary takes a long time to recover, the DR site will have many new contents. How do we transfer them back to the primary site when the primary comes back? In other words, how do we synchronize contents (vault and native files) between the new Primary (old DR) and the new DR (old Primary) site?
    Thanks for your attention.
    Sorry for my bad English.
    Cuong Pham

    Hi Cuong (and guys),
    I'm afraid the issue is not that simple. In fact, I think that the Archiver could be used for DR only by customers who have little data and few changes. Why do I think so?
    a) (Understanding System Migration and Archiving - 11g Release 1 (11.1.1)) "Archiver: A Java applet for transferring and reorganizing Content Server files and information." This means that you will use a Java applet to export and import your data. With a lot of items (you will need to transfer all the new and updated items!) or large items, it will take time (your DR site will always be "a few minutes late"). Besides, the Archiver transfers are based on batches - I don't think you can do continuous archiving - and they will have an impact on performance.
    b) Furthermore, (Exporting Data in Archives - 11g Release 1 (11.1.1)) "You can export revisions that are in the following status: RELEASED, DONE, EXPIRED, and GENWWW. You cannot export revisions that are in an active workflow (REVIEW, EDIT, or PENDING status) or that are DELETED." This means that the Archiver cannot be used for all your items.
    Therefore, together with the FMW DR Guide (Recommendations for Fusion Middleware Components), I think other techniques should be considered:
    - Real Application Clusters (RAC), WebLogic Clustering, cluster-ware file system: the first, almost error-free, and relatively cheap option is having your DR site as other nodes in DB and MW clusters. If any of your nodes goes down, the other(s) will still serve your customers (no extra work needed); plus, you can benefit from the "united power" of multiple nodes. RAC is available also in Oracle DB Standard Edition (for a max. 2-node DB cluster). The only disadvantage of this configuration is that it is not available for geo-clustering (the distance between RAC nodes must be at most a few hundred meters), so it does not cover DR scenarios like "location goes down" (e.g. due to networking issues).
    - Data Guard and a distributed file system: the option mentioned in the guide is actually this one. It is based on Data Guard, a free option of the Oracle Database Enterprise Edition, which can run in both asynchronous (a committed transaction on the primary site is immediately transferred to the DR site) and synchronous (a transaction is not committed on the primary until processed by the DR site; if the sites are far apart, or a lot of data is being sent, this can take quite long) modes. So, if you store your content in the database, Data Guard can resolve a lot. Unfortunately, not everything: the guide also mentions that some artifacts (that change!) are also stored on the file system (again, workflow updates, etc.), so you have to use file system sync techniques to send those updates. In theory, you could use the file system to send also the updates in the database, which is in the end nothing but files (in this case you will need the Partitioning option to split your database into smaller files), but DB guys hate this approach since it also transfers inconsistencies, so you could end up with an inconsistent database at the DR site, too.
    This option will require some administrative tasks: you will have to resolve inconsistencies resulting from the DG/file system sync, you will need to redirect your users to the DR site, and re-configure DG to make the primary site from your DR one. Note that once your original primary site is up again, you can use DG to transfer (again, immediately) the changes done in the meantime.
    As you can see, there is no absolute solution, so you need to evaluate your options, esp. with regards to your needs.
    Jiri

  • Speed between server and client when using FML

    I am using FML between server and client. The server accesses Oracle in only 5 ms, but the transfer back to the client takes about 100 ms. I have tuned my HP-UX 11 kernel and Tuxedo config file, but it was useless. Why?

    Hi Tumecan,
    The information you are looking for is available here:
    Frontend Network Load - Network Integration Guide (BC-NET) - SAP Library
    A few related SAP Notes:
    164102 - Network load between application server and front end
    500235 - Network Diagnosis with NIPING
    62418 - Network Load of SAPGUI Frontend Communication
    679918 - The front-end network time
    578118 - Long response times on the SAP GUI
    161053 - Using SAP GUI in WAN
    Regards,
    V Srinivasan

  • SCCM central site and primary site use the same SQL SERVER with two Instance.

    Hi Guys,
    I want to deploy SCCM 2012 central site and primary site in my domain, but I have only one SQL Server available. Can anyone tell me how to install the central site server and primary site server on the same SQL Server with two instances?
    Sean Xiao
    TechNet Community Support

    Although you can install it in the configuration you described above, we do not recommend you do it this way. If your SQL box has problems, all the data will go away and you will not have data redundancy.
    You need to configure different SQL ports and SQL Service Broker ports, e.g.:
    SQL port 4023, Service Broker port 4022 for the CAS instance
    SQL port 4024, Service Broker port 4021 for the PRI instance
    Juke Chou
    TechNet Community Support
    I agree with Johan that this configuration should not be used. But I want to clarify that the default port for "SQL port" (actually, SQL over TCP) is 1433 and the SQL Service Broker uses 4022. The configuration above should work, but the "correct" way would be to use 1433 and 4022 for the CAS and 10434 and 4023 for the Primary :)
    You can read more about Network Ports used by Configuration Manager here
    http://technet.microsoft.com/en-us/library/hh427328.aspx#BKMK_CommunicationPorts
    /Tim
    Tim Nilimaa | Blog: http://infoworks.tv | Twitter: @timnilimaa

  • Interfacing between LabVIEW and microcontroller using RS-232

    Hi, my task is to generate a waveform of a certain frequency and duty cycle in LabVIEW and feed it to a PIC18F452 microcontroller, and the output of the microcontroller, when connected to a CRO, should display the same frequency and duty cycle that were fed from LabVIEW.
    The problem: when I send decimal data from LabVIEW, the microcontroller receives ASCII data, e.g. 62 (decimal) shows up as 36 and 32 in the microcontroller registers (the ASCII codes for the characters '6' and '2'), and when I use the Type Cast function to convert the decimal data into ASCII data and then send it to the uC, it gives hex output, so I am confused about how to solve this problem.
    Please guide me.
    Thanks,
    satish

    So your display from the uc is giving you a hex display instead of the ASCII representation? Are you using VISA to send a string to the uc? Is that string being represented as hex instead of ASCII by the microcontroller? I'm confused about what exactly your setup and problem is...

  • Integration between PS and MS project using open PS

    Hello all
    We have a requirement to interface MS Project with SAP PS using Open PS, which is a standard SAP-provided interface. Since I haven't worked on this before, please help me answer the questions below. These are the client's requirements:
    1. Methodology of installing this interface.
    2. Projects created in MS Project should create a project in SAP R/3 and vice versa.
    3. Projects changed in MS Project should update projects in SAP R/3 and vice versa.
    4. Selective data transfer from MS Project to SAP R/3; for example, we may want to update only a portion of the project.
    5. How easy is it to download an MS Project project to SAP and vice versa? I remember in the old days downloading from SAP R/3 was easy, whereas uploading was a pain. I am sure SAP will have addressed this.
    6. Any other information that you think would help us understand this interface better from an installation, usability, maintenance and BAPI perspective.
    Thanks

    Hi,
    kindly go through the link below; it was and still is useful in regard to Open PS:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/5037d0e8-8bc2-2a10-6fb3-c4833289bb66
    Regards,
    Ajit

  • Flexconnect, branch and central site have same VLAN's

    Is anyone familiar with a FlexConnect deployment where the same VLANs are in use on the central and branch site?
    On both sites the following VLANs are in place:
    VLAN 32 = BYOD
    VLAN 31 = USER
    VLAN 40 = VOICE
    On the branch site I want to deploy FlexConnect. When creating the VLAN mapping in the AP configuration, all the VLANs are instantly assigned. For local branch DHCP, ip-helper addresses are configured on the branch switch. When a client connects to the FlexConnect AP it doesn't get an IP address. Suggestions?

    Hi Thomas,
    At the WLC location, do your clients get an IP? How did you set up the DHCP server: at interface level or with DHCP Override?
    For the FlexConnect sites:
     - enable VLAN Support
     - specify the Native VLAN for the AP management VLAN
     - add the VLAN Mapping: WLAN to the site's VLAN
     - finally, configure the switchport accordingly:
     switchport mode trunk
     switchport trunk native vlan ...
     switchport trunk allowed vlan all
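    If the clients still get no IP address, also double-check the local DHCP relay on the branch switch. A minimal sketch of the branch SVI, with invented VLAN, subnet and DHCP server addresses:
     interface Vlan31
      description USER VLAN at the branch
      ip address 10.31.0.1 255.255.255.0
      ip helper-address 10.0.0.10
     ! 10.0.0.10 stands in for your actual DHCP server
    And make sure VLANs 31, 32 and 40 actually exist on the branch switch and are allowed on the trunk towards the AP.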

  • Is it possible to move content between private and public sites after publishing the latter?

    Hi. Is anyone else in this position? My institution kept its original iTunes U site after migrating our public content to the new public site last year. We now have separate public and private (university log-in only) sites and must maintain both. The Public Site Administrator allowed the copying of existing content from the original iTunes U site to the new public site, but ONLY prior to publication of the public site. I have two related questions for the community:
    1) How is it possible to maintain a workflow in which faculty create a course for the private iTunes U site (wanting to keep the content restricted to our students while the course is live), but then move the content over to the public site once the semester ends?
    2) With the new app for Courses and the Course Manager on the public site, what do we do with the courses already created by faculty as Collections (either in the public or private sites)?
    I hope there are folks out there that can help.
    Thanks!
    Kevin

    I'd like to see a discussion on this exact topic as well. Have you determined yet how to move a private course (apple hosted) to the public site? Can you use an RSS feed from the private?
    joe

  • Sharing code between Flex and AIR versions using library project

    Hello everyone,
    I'm developing an application that has both Flex and AIR versions. In order to share code between these apps, I created a library project and added all my code there. Now I've set the library project as a dependency for both Flex and AIR projects. Since there are some components that use the DataService object, I've added fds.swc and fds_rb.swc and fiber_rb.swc modules to the libs directory of the library project. No compile errors. Now, if I try to run my Flex application, I'm getting this error:
    Variable mx.data::LocalStoreFactory is not defined.
    I know that this error comes up when playerfds.swc is not present in the path. But that is not the case here. I have added playerfds.swc, fds.swc and related lib files to the build path.
    If I go back and add the playerfds.swc file to the original library project, the error no longer appears. This is not a proper solution for me, since I need to share this project with AIR version also, and I cannot have both playerfds.swc and airfds.swc in the same project.. Has anyone faced an issue like this before?? What am I doing wrong??


  • Get data from Sharepoint site in different farm using webservice, and current site using claims mode authentication.

    Hi,
    I have a site that runs in Claims Mode (NTLM). That site has a web part that needs to show data from any SharePoint farm: SharePoint 2010, 2007 or 2003.
    I am getting 401 Unauthorized when I try to get data from the web service running in the SharePoint context, but when I run the same code in a Windows console application, it gives no error.
    What I suspect is that this issue is due to the fact that I have set claims mode authentication, because the same code runs in a different farm on a site that is configured using Windows authentication.

    So generally speaking, you're talking about a VERY long-running topic of authentication methods... and generally speaking, in the world of Windows, the only long-running authentication options have been:
    - NTLM (limited to one hop)
    - Kerberos (unlimited hops)
    - Application-level authentication (ex: SQL auth; also no hops)
    Recently, Microsoft has been investing in Claims Based Auth... and I fully expect claims to start working their way into other systems (previously starting to work into Windows via the CardSpace technology, but also in other ways such as Win8's ability to log in with a LiveID)... but building a new authentication method into ALL applications is a VERY long process, and the complexity of claims adds a LOT of consideration (claims from the same AD can look VERY different depending on the STS, so lots of questions around things like bridging claims).
    So as far as your SP auth needs...
    IF both applications are CLAIMS AWARE, then you MAY be able to use claims (which support unlimited hops)... but that's just not very likely yet (and will probably take another 5-10 years to be available across the entire enterprise)... otherwise, KERBEROS.
    Outside of the Microsoft world... KERBEROS is open spec, so it is supported by other systems (such as authenticating to Linux)... claims-based auth is also open spec, but again, still new... there are a few other options (LDAP, etc), but none that are native to Windows (so you rely on things like third-party auth modules for Windows, which Novell has done for DECADES with NDS and eDir).
    And again, SharePoint can convert claims to Kerberos using the C2WTS... which MS uses internally for things like Excel Services connecting to a backend SQL server... but it DOES require the Kerb and C2WTS configuration.
    If you're having issues using Kerb auth... then it sounds like Kerb isn't configured correctly... and Kerb is a PAIN to configure (the whitepaper for SP Kerb is ~100 pages long)... IIS (and SharePoint) also has the added benefit of failing over to NTLM if Kerb fails (and you shouldn't disable this behavior, since it'll break other settings elsewhere).
    Scott Brickey
    MCTS, MCPD, MCITP
    www.sbrickey.com
    Strategic Data Systems - for all your SharePoint needs

  • Problems with QoS between 2950 and 3550 with use of Native VLAN

    Hi!
    I am trying to set up QoS between a C2950 and a C3550; I have provided a drawing that might help in understanding the setup.
    As I understand it, since I only have the SMI image on the C2950, I have to run an 802.1Q trunk over the leased 2 Mb line to get QoS to work. And I DO get it to work, or at least it seems so to me.
    What I'm trying to set up QoS on is the traffic between a Nortel Succession Media Server and a Nortel i2004 IP Phone.
    And when I sniff the port that the Succession Media Server is connected to, I get this output:
    *BEGIN*
    *** TO IP Phone ***
    IP version: 0x04 (4)
    Header length: 0x05 (5) - 20 bytes
    Type of service: 0xB8 (184)
    Precedence: 101 - CRITIC/ECP
    Delay: 1 - Low delay
    Throughput: 1 - High throughput
    Reliability: 0 - Normal reliability
    Total length: 0x00C8 (200)
    ID: 0x5FE1 (24545)
    Flags
    Don't fragment bit: 0 - May fragment
    More fragments bit: 0 - Last fragment
    Fragment offset: 0x0000 (0)
    Time to live: 0x40 (64)
    Protocol: 0x11 (17) - UDP
    Checksum: 0x69EC (27116) - correct
    Source IP: 10.40.2.10
    Destination IP: 10.10.153.100
    IP Options: None
    UDP
    Source port: 5216
    Destination port: 5200
    Length: 0x00B4 (180)
    Checksum: 0x5C02 (23554) - correct
    *** FROM IP Phone ***
    IP version: 0x04 (4)
    Header length: 0x05 (5) - 20 bytes
    Type of service: 0xB8 (184)
    Precedence: 101 - CRITIC/ECP
    Delay: 1 - Low delay
    Throughput: 1 - High throughput
    Reliability: 0 - Normal reliability
    Total length: 0x00C8 (200)
    ID: 0x8285 (33413)
    Flags
    Don't fragment bit: 0 - May fragment
    More fragments bit: 0 - Last fragment
    Fragment offset: 0x0000 (0)
    Time to live: 0x7F (127)
    Protocol: 0x11 (17) - UDP
    Checksum: 0x0848 (2120) - correct
    Source IP: 10.10.153.100
    Destination IP: 10.40.2.10
    IP Options: None
    UDP
    Source port: 5200
    Destination port: 5216
    Length: 0x00B4 (180)
    Checksum: 0x5631 (22065) - correct
    *END*
    But then to the problem:
    Since the modems I use have IP addresses in them, I want to monitor them and be able to change settings in them.
    But to connect to units within the trunk, I have to set the native VLAN to VLAN 144, which provides the IP addresses I use for the modems, at both ends of the trunk.
    But if I do that, the tagging of the packets from the IP Phone disappears!
    Here's the output after the native VLAN is applied:
    *BEGIN*
    *** TO IP Phone ***
    IP version: 0x04 (4)
    Header length: 0x05 (5) - 20 bytes
    Type of service: 0xB8 (184)
    Precedence: 101 - CRITIC/ECP
    Delay: 1 - Low delay
    Throughput: 1 - High throughput
    Reliability: 0 - Normal reliability
    Total length: 0x00C8 (200)
    ID: 0xDEF8 (57080)
    Flags
    Don't fragment bit: 0 - May fragment
    More fragments bit: 0 - Last fragment
    Fragment offset: 0x0000 (0)
    Time to live: 0x40 (64)
    Protocol: 0x11 (17) - UDP
    Checksum: 0xEAD4 (60116) - correct
    Source IP: 10.40.2.10
    Destination IP: 10.10.153.100
    IP Options: None
    UDP
    Source port: 5240
    Destination port: 5200
    Length: 0x00B4 (180)
    *** FROM IP Phone ***
    IP version: 0x04 (4)
    Header length: 0x05 (5) - 20 bytes
    Type of service: 0x00 (0)
    Precedence: 000 - Routine
    Delay: 0 - Normal delay
    Throughput: 0 - Normal throughput
    Reliability: 0 - Normal reliability
    Total length: 0x00C8 (200)
    ID: 0x89E4 (35300)
    Flags
    Don't fragment bit: 0 - May fragment
    More fragments bit: 0 - Last fragment
    Fragment offset: 0x0000 (0)
    Time to live: 0x7F (127)
    Protocol: 0x11 (17) - UDP
    Checksum: 0x01A1 (417) - correct
    Source IP: 10.10.153.100
    Destination IP: 10.40.2.10
    IP Options: None
    UDP
    Source port: 5200
    Destination port: 5240
    Length: 0x00B4 (180)
    Checksum: 0x31CA (12746) - correct
    *END*
    See, there is no QoS tagging from the IP Phone anymore.
    If I set no switchport trunk native vlan 144 at both ends, the tagging is back.
    Any ideas? Is this a bug, or just some command I don't know about?
    Please take a look at the picture to get a more understandable view of the setup.
    Thanks!

    Well, native VLANs are by definition untagged, so there's nothing wrong with that as long as you are getting the expected results. By the way, I think you should include VLAN 402 in the allowed VLAN range on the Catalyst 3550's FastEthernet0/45 trunk port; otherwise this VLAN will be completely isolated from the rest of the network.
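    For reference, assuming the FastEthernet0/45 port mentioned above, the change would look something like this:
     interface FastEthernet0/45
      switchport trunk allowed vlan add 402
    Note the add keyword: it appends VLAN 402 to the existing allowed list, whereas omitting it would replace the whole list.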

  • Difference between .mac and private site uploads

    My .mac site is perfect, but the same site, when published to a folder and FTP'd to my private domain, has strange problems:
    1. I use the blog template for art reviews. Each review works fine on the .mac site, but when I move around on the private domain site I get a 404 error. By moving around I mean using the "previous" and "next" buttons.
    2. Some of the links from the home page to one of the review pages haven't worked, so I've had to take them off. But they worked on the .mac site and not on the private domain one.
    3. I use the blog template for multiple art reviews, slide shows and press releases, and have the same problem no matter how I do it. And I've tried redoing it over and over and over.
    Take a peek and you'll see.
    Try the "next" button and see what happens on each of these sites:
    http://www.theartofrwfirestone.com/Reviews/Entries/2007/4/3at_the_Walter_Wickiser_Gallery%2C_NY%2CNY.html
    http://web.mac.com/franktobe/TheArt_of_R.W._Firestone/Reviews/Entries/2007/4/3_at_the_Walter_Wickiser_Gallery%2C _NY%2CNY.html

    Hey, I understand why you asked why I deleted the public_html folder. I've been trying everything because NOTHING works. But I've got everything back in place and now my public_html folder has two items: an index.html file and the folder with all my pages. When I go to that folder on my desktop and click on the index file and then go into my REVIEWS (blog template) section, all the pages and links work.
    BUT, when I do it online, I get NOT FOUND errors like this one. Thus my frustration.
    Not Found
    The requested URL /RWFSite/Reviews/Entries/2007/3/5“Feelings”_Exhibition_at_Wickiser_Gallery,NYC.html was not found on this server.
    Could it be that the page name is the title and it has special characters?
    Could it be that the page name is too long?
    Could it be that iWeb is not the right product for me?
    Also, when this is all fixed, then I have to go into every html page and add keywords and description meta tags because iWeb doesn't seem to enable this feature.
    Thanks for any and all help, suggestions or comments.

  • Share files between Mac and Win XP using AirPort Express

    Hi.
    I am having trouble sharing files between my Mac and my Win XP PC. The home network is set up like this:
    - The PC is connected to our router
    - The AirPort Express is connected to our router
    - The MacBook is connected to the AirPort Express
    After a long time it is now possible for me to see my Mac from the XP PC, but I can't connect to my Mac and I can't see the XP PC from my Mac.
    Can anyone tell me how to fix this very annoying problem? Are there any guides on how to set up the network "correctly"?
    Hope you can help
    Message was edited by: Kasper Soerensen

    Hi Eric and welcome to Discussions and the Apple world.
    Mac OS X can read and write Windows partitions (like the Boot Camp Windows partition you are about to create) when using FAT32 as the file system for Windows.
    However, with FAT32 you are limited to a partition size of 32 GB.
    Mac OS X can also read from Windows partitions that use the NTFS file system, but it cannot write to them unless you use a third-party helper such as Paragon's NTFS for Mac http://www.paragon-software.com/home/ntfs-mac/ or NTFS-3G http://www.ntfs-3g.org/
    Windows cannot even see or use a Mac OS X partition without additional help from MacDrive http://www.mediafour.com/products/macdrive/
    Regards
    Stefan

Maybe you are looking for

  • Wife and I share itunes account.  How do we save her info and start her own account?

    When we first got our iPhones, we were told at the Apple Store we could use the same iTunes account. We still do. Now with 2 iPhones, 2 laptops, iPads, etc. it is a mess with FaceTime and sharing apps. How do I save her information and start a new acc

  • Can I send Apple TV 2 audio to Airport Express

    I can't seem to find an answer for this with respect to the ATV2. Everything I read talks about using AirPlay to stream from iTunes on a computer to either the Apple TV or to an AirPort Express. However, what I'd like to do is have the audio from my App

  • IPhone TV out w/ iTV Link Monster Cable

    Purchased the Monster Cable iTV Link, how do I set my iPhone to use this for watching my movies on my TV? I cannot find any info in the user manual. Please help.

  • Updates for nokia 5530

    hello sir, I am using a Nokia 5530 mobile and I updated my mobile successfully (V31.) and I am eagerly waiting for the next update; since June the updates are still not available... I request you to issue the latest updates like those for the Nokia 5800 which i

  • Tab Canvas controlled programmatically

    Hi, I have a tab canvas which has tab1 and tab2. Tab1 has master data and tab2 has detail information. I have a Detail button in tab1... if I click the Detail button, then in WHEN-BUTTON-PRESSED Go_block('Detail_blk'); control goes to the new block, but the