Distributed on a PC

From: ESDS01:: "MAIL-11 Daemon" 16-DEC-1996 18:18:02.76
To: [email protected]
CC:
Subj: forte-users-digest V1 #169
forte-users-digest Monday, 16 December 1996 Volume 01 : Number 169
From: "Russell Francis" <[email protected]>
Date: Mon, 16 Dec 1996 20:00:32 -0000
Subject: Interface to Business Objects
Hi all,
We have a client who plans to use Business Objects as their ad hoc reporting
tool to accompany their Forte' application. There are a number of subsystems
within their Forte' application that will contain fixed reports.
We are considering doing all their reports in Business Objects. This would
mean that the interface to start the report and provide any parameters
required for the report would be part of the Forte' Application. The actual
printing and previewing would be done by Business Objects using a
predefined report, invisible to the user.
Has anybody tried this before?
Any information would be appreciated,
Regards,
Russell Francis.
/ __ \ X EXIX Limited
| ______| I Excellence In
\ \_____ Corporate Software
\______/ X Solutions
Russell Francis - Forte' Consultant
[email protected]
Tel: + 44 468 94 60 60
George Nowakowski <[email protected]> wrote:
<snip!>
Can anyone please give me some hints as to how I might get Forte
Distributed to work on a single PC? <snip!>
What you need to do is start your PC as a server. Unfortunately you
can't (according to page 87 of the system management guide) run a
Macintosh or PC/Windows machine as a Forte server. Assuming that by "PC"
you mean a Wintel machine, I see a couple of alternatives:
* Run NT. Forte supports NT as a server on an Intel platform, but you'll
want a serious amount of memory (80 meg was enough to serve 4
developers here), and probably a serious CPU as well. Our
experience was on an NT server. I have NOT tried to make an NT
client into a Forte server, but given all the heat in
comp.sys.whatever.nt.system about the lack of differences "under
the hood" between NT server and NT client, I suspect it might
work. You'll need to install TCP/IP (which doesn't happen by
default).
* Hack around with the Windows kit. Judging by my Macintosh, Forte
provides all the pieces; maybe you can persuade Windows to be a
server.
Tom Wyant The opinions expressed in this document are
[email protected] those of the author, except for the
[email protected] following, which is William Shakespeare's:
"The first thing we do is, we'll kill
all the lawyers."
Henry VI, Part II, Act 4, Scene 2

I have found it possible to do "stand-alone" development without a node
manager or environment manager (under both Windows NT 3.51 & 4.0) by:
1) Creating a shadow repository
2) Detaching the shadow repository
3) Running Forte stand-alone against the shadow (switched using Forte
Control Panel)
All application resources must be installed & running locally (ODBC connections
to databases, etc.)
-- Greg
=======================
[email protected]
www.sagesoln.com
415.329.7243 x 305
=======================
George,
There are a couple of ways you can do testing on a single PC.
1. If you have NT 3.51 or 4.0 on your PC, you can run the
environment manager (nodemgr -e YourEnvName). With an environment
manager running, you can then run Forte Distributed, partition your
application, and run it distributed, which will start server partitions on the NT box, etc.
2. If you have Win 3.1 or Win 95, then you cannot run an environment
manager or server partitions, but you can still connect to an ODBC database.
Setting up an ODBC resource manager is very straightforward. You do some
of the work in the ODBC control panel (i.e., mapping a name to a particular
data source such as an Access file, flat file, etc.). You then run Forte
Standalone, which will automatically use ODBC for any DBSession or
DBResourceMgr service objects that need to be created by a run. You don't need to
change your service object to point to an ODBC Resource Mgr; the system
does it automagically.
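For reference, on a modern Windows machine the same name-to-data-source mapping can be scripted rather than clicked through in the Control Panel. A minimal PowerShell sketch, assuming the built-in Wdac cmdlets (Windows 8 / Server 2012 or later); the DSN name and Access file path are hypothetical:
# Register a System DSN pointing at an Access database - the scripted
# equivalent of the ODBC Control Panel mapping step.
Add-OdbcDsn -Name "ForteDemo" -DriverName "Microsoft Access Driver (*.mdb, *.accdb)" -DsnType "System" -SetPropertyValue @("DBQ=C:\data\ForteDemo.mdb")
# Verify the DSN was registered.
Get-OdbcDsn -Name "ForteDemo"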
Hope this helps,
Bobby

Similar Messages

  • DNS set up when not distributing dns

    OK, it's not clear, but let me try: I am in a place where they give me an IP address, and my domain names come from GoDaddy and are directed from there (sorry, I'm French). Here's my question:
    Do I need to set up DNS on Leopard Server if I want to use all the services (Open Directory, QTSS, Podcast Producer, etc.), or, since I don't distribute DNS, can I skip the DNS service?
    I'm not sure I expressed myself properly, so ask questions if you need to know more.

    Do I need to set up DNS on Leopard Server if I want to use all the services (Open Directory, QTSS, Podcast Producer, etc.), or, since I don't distribute DNS, can I skip the DNS service?
    If you want to run your own directory service for your clients then you SHOULD run your own DNS server. This is essential if you're setting up your server in a private-class network (e.g. 10.1.x.x or 192.168.x.x) since GoDaddy are not going to be able to resolve your internal hostname(s).
    The fact that no external users will ever query your server for DNS lookups doesn't matter - your own machine will and that's what counts.
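    As a quick check that internal names resolve against your own DNS server rather than GoDaddy's, you can query it directly. A minimal sketch from a Windows client with PowerShell 3 or later (the hostname and server address are hypothetical; dig does the same job from the Mac itself):
    # Ask the internal DNS server (hypothetical 192.168.1.2) directly for an
    # internal hostname; it should answer with the private address.
    Resolve-DnsName myserver.example.private -Server 192.168.1.2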

  • How to populate the Cover Sheet of a Distributed Form

    I am a newbie to LiveCycle forms and have created my first simple form to manage travel requests within my company.  This is distributed to employees, who fill out their multiple travel requests with associated expected costs and return the form to the administrator, who in turn either approves or disapproves some of the items in each employee's form.  My question is: how can I see the total approved costs for all employees in this dataset?  Is there a way of consolidating this into the cover sheet, or how would one do this?  All advice welcomed.
    Thanks in advance.

    Each form that the employees fill out represents a separate request. One request does not know of the existence of other requests. If you want a consolidated view of all requests, you will need a means of tracking or storing all of the information in a common place. Usually this is done by writing each form instance to a database; then you can get your consolidated view by querying the DB for that information (for example, something like SELECT SUM(cost) FROM requests WHERE approved = 1 would return the total approved costs).
    Hope that helps

  • How do you make Adobe Reader 9.3 truly Distributable?

    I have a set of electronic minutes in PDF format with hyperlinks between files, bookmarks and other nice navigational features.  I need to distribute these minutes on CD to my customer with a distributable version of Adobe Reader 9.3 so that they can view the files.  We use an autorun file that runs Reader off the CD and opens the main file.  I have downloaded Adobe Reader 9.3 from the "Distributable Version" (for CDs) link provided by Adobe when I requested a license to distribute Adobe Reader on my CDs.  My customer is not connected to the internet and has to jump through all sorts of hoops to install Adobe Reader on their PCs.
    My problem is that our PDF files on the CD do not navigate correctly between the different files we have while running the Adobe Reader installed on the CD.  If we run the files in Adobe Acrobat Professional 9.0 installed directly on the PC, they work beautifully.  If we run the files using Adobe Reader 9.2 or 9.3 installed directly on the PC, they work beautifully.  We have found that the Reader running off the CD will not correctly navigate these same files, unless Reader 9.2 or higher is actually installed on the PC running the CD.  This is a problem!  I'm guessing there are files installed elsewhere that are not contained on the CD version and thus the CD version of the Reader does not run correctly.
    Is there a way to force ALL the files required by Adobe Reader 9.3 to install into the same folder so that ALL the files can then be loaded onto a CD for distribution with our PDF files? 
    Please note:  We have been providing CDs with Adobe Acrobat on them to run our Minutes automatically for several years now, so this used to work very well.  The newest distributable version of Adobe Reader is not behaving like the older versions and appears to require some number of files not contained on the CD.

    What problems are you encountering when you send it to the trash?

  • Error in Central Instance installation ERP 2005 Distributed System

    Hi All,
    I am currently trying to install the Central Instance for an ABAP distributed system on AIX.
    During the installation I encounter this error message:
    WARNING 2006-09-20 14:22:39
    Could not determine a valid JAVA_HOME directory from the environment.
    INFO 2006-09-20 14:22:39
    Creating file /tmp/sapinst_instdir/ERP/SYSTEM/ORA/DISTRIBUTED/AS-ABAP/CI/FormalPasswordCheck.log.
    ERROR 2006-09-20 14:22:39
    FSL-04008  Unable to access application bin/java. A file or directory in the path name does not exist.
    Before the installation SAPINST_JRE_HOME and JAVA_HOME variables were set to /usr/java14.
    Please help.
    Thanks,
    Chie

    Hi Chie,
    that is definitely wrong;
    the output of java -version should be similar to this one:
    java version "1.4.2"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2)
    Classic VM (build 1.4.2, J2RE 1.4.2 IBM AIX 5L for PowerPC (64 bit JVM) build caix64142sr1aifx-20051020 (SAP 142SR1a + 88494 + 84428 + 83602 + 89528 + 90372 + 88233 + 66827 + 92741 + 95636 + 96556 + 96581) (JIT enabled: jitc))
    note the "64 bit" and "SAP" in there...
    You can download it here:
    https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=javasap
    Regards,
    Pascal
    PS: please give points for helpful answers

  • How to Protect your Custom Access Database Product - so that you can sell & distribute it?

    I'm looking for an update on this topic as I have been away from Access for a couple of years and have not kept up with the latest.
    Hopefully they have made it easier to design, develop, sell and distribute custom database solutions. So here goes...
    Question A:
    If one develops a custom database product with Access 2013 what is the current best way to...
    1 - Prevent it from being (too easily) copied
    2 - Prevent it from being (too easily) reverse engineered
    3 - Providing a time-limited free demo copy?
    4 - Providing a demo copy with limited functionality... like limiting the number of records in an important table, or whatever?
    5 - What have I left out of this list that should be considered for protecting one's investment in the development of the product? (other than copyright, of course.)
    Question B:
    What is the latest on being able to migrate an Access database to the cloud?
    1 - Entirely online
    2 - Part in the cloud and part on the user's machine
    3 - And what about all that VBA code - is there no way to make that work in the cloud and/or on a web server... or does it all have to be tossed and all the coding redone?
    Question C:
    What are other alternatives solutions for selling your custom database application while protecting all your investment in developing it?
    1 - Write the front end in C++  (so that it is fully compiled) and the back end in ASP with MS SQL Server? (or alternative server side solutions)
    2 - Write the whole thing as a server side solution with browser interface?
    3 - Or what?
    Thanks for any help.

    Hi Fran_3,
    >>What is the latest on being able to migrate an Access database to the cloud?
    In my opinion, the Access Web App would be a better choice.
    Regards,
    Tony
    Help each other

  • How do I make and distribute an image to multiple MacBooks?

    Hello,
    I am the IT support person for a number of schools.  We use a mixture of Macs and PCs.
    I am looking for a nice tidy option to create an image of a "master" client and distribute it to a group of student MacBooks.  All the MacBooks are the same model and are all running Snow Leopard (though they were released with Leopard).  Initially, one of the teachers created a user account (called student), with all the software and settings that we needed, on one of them and used Migration Assistant to copy that account to the others.
    I prefer to have a master image I can restore to at any time.
    I figure that Disk Utility will do the job - I'm just not 100% sure of which steps to take.
    So far I have created my "master" MacBook that is set up just the way I want it.
    Next I was going to attach it to the Mac I use to run everything on (ARD etc.) via FireWire and boot the master in Target Disk Mode.
    Then I would run Disk Utility on my Mac and use the New Image button to create an image called "studentPOD.dmg"  (There is room on my Mac for this, or I can put it on an external USB disk.)
    Is this correct, and once I have the image, how do I get it onto the other MacBooks?  Is there anything special I have to do to make it bootable?  They are all part of a student POD and can be re-imaged at any time without worrying about losing data... so I can afford to experiment...
    Any help to clarify would be much appreciated.
    Also, should I look for a Snow Leopard server to do this sort of thing?
    Many thanks
    Suzanne

    Making a Master Image on Snow Leopard - this is a bit long winded, sorry!
    Build your Mac the way you want it, all updates, account settings etc, this will be your master image so don’t personalise it too much.
    There are two ways to make an image with Disk Utility - they both use the Restore function.
    The ideal one is to boot from Mac OS X Install DVD and go to Disk Utility.
    The theoretically less ideal method is to boot from the master image itself and then go to Disk Utility.
    The DVD option is theoretically better as it involves copying the master image while it is unused as opposed to the second option which copies the master image while it is in use.  However, I have done this with the master image mounted and it works perfectly.
    If you use the DVD method then the downside is that the Destination image size has to be the same size or bigger than the Source image size.  If you use the boot-from-the-master-image method, as long as there is enough space in the Destination image to fit the files (usually around 18 GB in my images) then it works.  This is good because it means you can fit your master image on a small disk!
    (NB this is not the case with Lion, it will not let you use the master image as a source if you have booted from it)
    Ok, so you are in Disk Utility and you can see your Macintosh HD with your master image on it and you can see your USB drive with its empty image on it; this image has to be formatted as Mac OS Extended (Journaled).  It has to be at least as big as the used space on the Macintosh HD master image.
    Now you are going to use the Restore tab to restore the master image to the USB drive.
    Drag and drop the Macintosh HD master image into the Source box
    Click in the Destination box then drag and drop the USB image into it
    Make sure the Erase Destination box is ticked.
    Click Restore and when asked if you are sure click on Erase. (unless you are not sure of course!)
    It might take an hour to restore, once it is finished, boot from the USB image (which will now be named the same as your master image) and make sure it is what you were expecting, test it etc.  I usually rename it at this point to something other than Macintosh HD, this can be done by right clicking on the disk in finder and renaming it.
    Restoring from a disk image:
    On the Mac that you want to re-image with the master image:
    Plug in USB drive containing your master image (created above)
    Boot from the Mac OS X Install DVD and go to Disk Utility from the Utilities menu (you will need to go through the “use English as the main language” bit first).
    Disk Utility might take a while to find all the disks / images…
    Highlight Macintosh HD (or whatever the image name on the Mac you are about to re-image is)
    Click on the Restore tab
    Drag and drop the master image into the Source box.
    Click in the Destination box and then drag and drop the Macintosh HD image into it.
    You should have master image in the source box and Macintosh HD image in the destination box.  The erase destination box should be ticked.
    Check everything is how you want it and click on the Restore button.
    It will ask you if you’re sure – if you are then click Erase – if you’re not then click cancel and go play with a PC!
    It will tell you how long it’s going to take; mine take around 45 mins.
    Once it's finished, boot from it and Bob's your uncle!

  • Why is it only possible to run queries on a Distributed cache?

    I found by experimentation that if you put a NearCache (only for the benefit of its QueryMap functions) on top of a ReplicatedCache, it will throw a runtime exception saying that the query operations are not supported on the ReplicatedCache.
    I understand that the primary goal of the QueryMap interface is to be able to do large, distributed queries on the data across machines in the cluster. However, there are definitely situations where it is useful (such as in my application) to be able to run a local query on the cache to take advantage of the index APIs, etc, for your searches.

    Kris,
    I believe the only APIs that are currently not supported for ReplicatedCache(s) are "addIndex" and "removeIndex". The query methods "keySet(Filter)" and "entrySet(Filter, Comparator)" are fully implemented.
    The reason the index functionality was "pushed" out of the 2.x timeframe was an assumption that a ReplicatedCache would hold a not-too-big number of entries, and since all the data is "local" to the querying JVM, the performance of a non-indexed iterator would be acceptable. We do, however, plan to fully support the index functionality for ReplicatedCache in our future releases.
    Unless I misunderstand your design, since the com.tangosol.net.NamedCache interface extends com.tangosol.util.QueryMap there is no reason to wrap the NamedCache created by the ReplicatedCache service (i.e. returned by CacheFactory.getReplicatedCache method) using the NearCache construct.
    Gene

  • Distributed Cache service stuck in Starting Provisioning

    Hello,
    I'm having a problem with starting/stopping the Distributed Cache service on one of the SharePoint 2013 farm servers. Initially, Distributed Cache was enabled on all the farm servers by default and it was running as a cluster. I wanted to remove it from all hosts
    but one (APP server) using the below PowerShell commands, which worked fine.
    Stop-SPDistributedCacheServiceInstance -Graceful
    Remove-SPDistributedCacheServiceInstance
    But later I attempted to add the service back to two hosts (WFE servers) using the below command, and unfortunately one of them got stuck in the process. When I look at Services on Server in Central Admin, the status says "Starting".
    Add-SPDistributedCacheServiceInstance
    Also, when I execute the below script, the status says "Provisioning".
    Get-SPServiceInstance | ? {($_.service.tostring()) -eq "SPDistributedCacheService Name=AppFabricCachingService"} | select Server, Status
    I get "cacheHostInfo is null" error when I use "Stop-SPDistributedCacheServiceInstance -Graceful".
    I tried below script,
    $instanceName ="SPDistributedCacheService Name=AppFabricCachingService" 
    $serviceInstance = Get-SPServiceInstance | ? {($_.service.tostring()) -eq $instanceName -and ($_.server.name) -eq $env:computername}
    $serviceInstance.Unprovision()
    $serviceInstance.Delete()
    but it didn't work either, and I got the below error.
    "SPDistributedCacheServiceInstance", could not be deleted because other objects depend on it.  Update all of these dependants to point to null or 
    different objects and retry this operation.  The dependant objects are as follows: 
    SPServiceInstanceJobDefinition Name=job-service-instance-{GUID}
    Has anyone come across this issue? I would appreciate any help.
    Thanks!

    Hi,
    Are you able to ping the server that is already running Distributed Cache from this server? For example:
    ping WFE01
    As you are using more than one cache host in your server farm, you must configure the first cache host running the Distributed Cache service to allow Inbound ICMP (ICMPv4) traffic through the firewall. If an administrator removes the first cache host (the one configured to allow Inbound ICMP traffic) from the cluster, you must configure the first server of the new cluster to allow Inbound ICMP (ICMPv4) traffic through the firewall.
    You can create a rule to allow the incoming port.
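    A minimal sketch of such a rule, assuming the NetSecurity cmdlets available on Windows Server 2012 and later (the rule name is just an example):
    # Allow inbound ICMPv4 echo requests (ping) through the firewall
    # on the first Distributed Cache host.
    New-NetFirewallRule -DisplayName "Allow ICMPv4 ping for Distributed Cache" -Protocol ICMPv4 -IcmpType 8 -Direction Inbound -Action Allow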
    For more information, you can refer to the blog:
    http://habaneroconsulting.com/insights/Distributed-Cache-Needs-Ping#.U4_nmPm1a3A
    Thanks,
    Eric
    Forum Support
    Please remember to mark the replies as answers
    if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]
    Eric Tao
    TechNet Community Support

  • How can i configure Distributed cache servers and front-end servers for Streamlined topology in share point 2013??

    My question is regarding SharePoint 2013 farm topology. If I want to go with a streamlined topology and have 2 Distributed Cache and RM servers + 2 front-end servers + 2 batch-processing servers + a clustered SQL Server, then how will the Distributed Cache servers connect to the front-end servers? Can I use the Windows 2012 NLB feature? If I use NLB, do I need to install NLB on all Distributed Cache servers and front-end servers and split out services? What would the configuration be for my scenario?
    Thanks in advance!

    For the Distributed Cache servers, you simply make them farm members (like any other SharePoint servers) and turn on the Distributed Cache service (while making sure it is disabled on all other farm members). Then validate that no other services (except for the Foundation Web service, for ease of solution management) are enabled on the DC servers and that no end-user requests or crawl requests are being routed to the DC servers. You do not need/use NLB for DC.
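    A minimal sketch of that arrangement from the SharePoint Management Shell (the server split and the status filter are illustrative assumptions, not part of the original reply):
    # On each farm member that should NOT run Distributed Cache:
    Stop-SPDistributedCacheServiceInstance -Graceful
    Remove-SPDistributedCacheServiceInstance
    # On each dedicated Distributed Cache server:
    Add-SPDistributedCacheServiceInstance
    # Confirm which servers now host the service instance:
    Get-SPServiceInstance | Where-Object { $_.TypeName -eq "Distributed Cache" } | Select-Object Server, Status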
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Foundation 2013 Farm and Distributed Cache settings

    We are on a 3-tier farm - 1 WFE + 1 APP + 1 SQL - and have had many issues with AppFabric and Distributed Cache, plus an additional issue with noderunner/Search Services.  Memory and CPU are running very high.  I read that we shouldn't be running Search and Distributed Cache on the same server, nor using a WFE as a cache host.  I don't have the budget to add another server to my environment. 
    I found an article (IderaWP_CachingFormSharePointPerformance.pdf) saying "To make use of SharePoint's caching capabilities requires a Server version of the platform." because it requires the publishing feature, which Foundation doesn't have. 
    So, I removed Distributed Cache (using PowerShell) from my deployment and disabled AppFabric.  This resolved 90% of server errors but performance didn't improve. Now not only am I getting errors on Central Admin - it expects Distributed Cache - but I'm also getting disk read operations of 4000 ms.
    Questions:
    1) Should I enable AppFab and disable cache?
    2) Does Foundation support Dist Cache?  Do I need to run Distributed Cache?
    3) If so, can I run with just 1 cache host?  If I shouldn't run it on a WFE or an App server with Search, do I have to stop Search altogether?  What happens with 2-tier farms out there? 
    4) Reading through the labyrinth of links on TechNet and MSDN on the subject, most of them say "Applies to SharePoint Server".
    5) Anyone out there on a Foundation 2013 production environment that could share your experience?
    Thanks in advance for any help with this!
    Monica

    That article is referring to BlobCache, not Distributed Cache. BlobCache requires Publishing, hence Server, but DistributedCache is required on all SharePoint 2013 farms, regardless of edition.
    I would leave your DistCache on the WFE, given the App Server likely runs Search. Make sure you install AppFabric CU5 and make the changes noted in the KB for AppFabric CU3.
    You'll need to separately investigate your disk performance issues. Could be poor disk layout, under spec'ed disks, and so on. A detail into the disks that support SharePoint would be valuable (type, kind, RPM if applicable, LUNs in place, etc.).
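    Once the CUs are applied you can sanity-check the cache cluster from PowerShell; a minimal sketch, assuming the AppFabric administration module that the SharePoint prerequisite installer places on the server:
    # Connect to the cache cluster this server belongs to and list host status;
    # each cache host should report a Service Status of UP.
    Import-Module DistributedCacheAdministration
    Use-CacheCluster
    Get-CacheHost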
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Limitation on number of objects in distributed cache

    Hi,
    Is there a limitation on the number (or total size) of objects in a distributed cache? I am seeing a big increase in response time when the number of objects exceeds 16,000. Normally, the ServiceMBean.RequestAverageDuration value is in the 6-8ms range as long as the number of objects in the cache is less than 16K - I've run our application for weeks at a time without seeing any problems. However, once the number of objects exceeds the magic number of 16K the average request duration almost immediately jumps to over 100ms and continues to climb as more objects are added.
    I'm fairly confident that the cache is indexed properly (as Dimitri helped us with that). Are there any configuration changes that could possibly help out here? We are using Coherence 3.3.
    Any suggestions would be greatly appreciated.
    Thanks,
    Jim

    Hi Jim,
    The results from the load test look quite normal: the system fairly quickly stabilizes at a particular performance level and remains there for the duration of the test. In terms of latency results, we see that the cache.putAll operations are taking ~45ms per bulk operation where each operation is putting 100 1K items; for cache.getAll operations we see about ~15ms per bulk operation. Additionally, note that the test runs over 256,000 items, so it is well beyond the 16,000 limit you've encountered.
    So it looks like your application is exhibiting different behavior than this test. You may wish to configure this test to behave as similarly to yours as possible. For instance, you can set the size of the cache to just over/under 16,000 using the -entries parameter, set the size of the entries to 900 bytes using the -size parameter, and set the total number of threads per worker using the -threads parameter.
    What is quite interesting is that at 256,000 1K objects the latency measured with this test is apparently less than half the latency you are seeing with a much smaller cache size. This would seem to point at the issue being related to or rooted in your test. Would you be able to provide a more detailed description of how you are using the cache and the types of operations you are performing?
    thanks,
    mark

  • Federation/Distributed Model for SAP PI

    Hello Folks,
    To begin with, this is in reference to an insightful blog by Niki Scaglione - Decentral Adapter Engine is Mandatory!
    We have been planning to consider a distributed model, i.e., to have our central instance (PI 7.3 dual stack) running along with a decoupled AEX.
    We want to run some specific interfaces that are critical to the business and also extract high performance from the decentral AEX. In short, decouple the critical interfaces (run on the AEX) from all other interfaces (central).
    So this brings me to the questions below:
    1. If we use a decentralized AEX, does this mean we will lose out on central monitoring? We definitely don't want to open two URLs to monitor PI.
    2. If we have to reap the benefits of performance improvement, do we really need to install the AEX as a separate host? What does that mean overall? Again, we don't want the instances to run independent of each other, i.e., once again the emphasis is to have a single monitoring access point.
    Do share your thoughts on how you see this architecture.
    CC: Eduardo Chiocconi, Mariana Mihaylova, William Li
    Michal Krawczyk
    Thanks,
    Shabarish

    Hi Shabz,
    I am glad to reply to your question, though I'm not sure how much it will help from the architecture point of view. Giving it a try:
    Currently in my landscape we are running all banking and business-critical interfaces and huge-file-size interfaces on the non-central Adapter Engine. Load on the central AE is reduced and performance has increased.
    1. If we use a decentralized AEX, does this mean we will lose out on central monitoring? We definitely don't want to open two URLs to monitor PI.
    Ans: We have to use two URLs for channel monitoring, however on the same host. Currently in my landscape we monitor channels in both the central AE and the NCAE.
    2. If we have to reap the benefits of performance improvement, do we really need to install the AEX as a separate host? What does that mean overall? Again, we don't want the instances to run independent of each other, i.e., once again the emphasis is to have a single monitoring access point.
    Ans: If you don't want them to run independent of each other, then your AEX for load balancing/performance will be on the same host on which your central Adapter Engine runs.
    Thanks
    Pawan

  • Airport not distributing DNS servers over network

    Hi everyone,
    I connect to the Internet over ADSL (ISP: Arnet Highway, Buenos Aires, Argentina) using PPPoE from my MacBook Pro.
    I have my ADSL modem connected to the Airport Extreme (802.11n) and distributing IP over DHCP just fine. Every device that joins the network obtains a valid IP.
    However, DNS servers aren't distributed by the router over the network. Every connected device has to be manually configured with my ISP's DNS servers to be able to resolve hosts, instead of asking the router for these addresses, as it should.
    Initially I thought there might be a problem obtaining the DNS servers from the ISP. So in Airport Utility, in the Internet / PPPoE settings, I manually set my ISP's DNS servers, which should be distributed over the network to all connected devices.
    This doesn't happen, and every time somebody new joins my wireless network I have to manually change the DNS servers for that connection, which, as I'm sure you'll agree with me, can be quite annoying. Not to mention what would happen if my ISP decides to use dynamic DNS addresses.
    Thanks for any help you might provide.
    Cheers.

    Hello belbo,
    I connect to the Internet over ADSL using PPPoE from my MacBook Pro.
    Is your Macbook Pro Network configured to use PPPoE or DHCP?
    I have my ADSL modem connected to the Airport Extreme (802.11n) and distributing IP over DHCP just fine. Every device that joins the network obtains a valid IP.
    Is NAT enabled on the AE? Are the valid IP addresses obtained from your ISP or from the AE?
    However, DNS servers aren't distributed by the router over the network. Every connected device has to be manually configured with my ISP's DNS servers to be able to resolve hosts, instead of asking the router for these addresses, as it should.
    When you setup the AE to use PPPoE did you enter a Domain Name or a DHCP Client ID?
    Initially I thought there might be a problem obtaining the DNS servers from the ISP. So in Airport Utility, in the Internet / PPPoE settings, I manually set my ISP's DNS servers, which should be distributed over the network to all connected devices.
    The DNS servers listed in the AE aren't distributed to each network device but are only used to translate names into IP addresses when needed by a network device.
    This doesn't happen, and every time somebody new joins my wireless network I have to manually change the DNS servers for that connection, which, as I'm sure you'll agree with me, can be quite annoying. Not to mention what would happen if my ISP decides to use dynamic DNS addresses.
    If your AE is distributing IP addresses using DHCP and NAT then this should not be a problem, but I'm not sure without more information about the questions I asked.
    Later.
    Buzz

  • In R12 shared appltop/distributed appltop

    Hi,
    In R12, how can I confirm whether the environment is a shared APPL_TOP or a distributed APPL_TOP?
    with regards
    Surya

    Hi,
    On one node I created a file (a.txt) under APPL_TOP; thereafter, from another node, I removed it (rm a.txt). With that I came to know it is shared.
    Also, in the configuration file the values are as follows:
    <TIER_ADADMIN oa_var="s_isAdAdmin">YES</TIER_ADADMIN>
    <TIER_ADWEB oa_var="s_isAdWeb">YES</TIER_ADWEB>
    <TIER_ADFORMS oa_var="s_isAdForms">YES</TIER_ADFORMS>
    <TIER_ADNODE oa_var="s_isAdConc">YES</TIER_ADNODE>
    <TIER_ADFORMSDEV oa_var="s_isAdFormsDev">YES</TIER_ADFORMSDEV>
    <TIER_ADNODEDEV oa_var="s_isAdConcDev">YES</TIER_ADNODEDEV>
    <TIER_ADWEBDEV oa_var="s_isAdWebDev">YES</TIER_ADWEBDEV>
    with regards,
    Surya

  • How can I prevent a PDF file from being copied, printed or downloaded? Students should only be able to view the text and not distribute it in any way.

    How can I prevent a PDF file from being copied, printed or downloaded? Students should only be able to view the text and not distribute it in any way.

    You can prevent it from being printed by applying a security policy to it in Acrobat. The rest can't be prevented, unless you spend a LOT of money on DRM protection.
