Programmatically manage a large number of subviews
Hello again everyone,
I'm building an application with something I can't quite get my head around and I'm hoping someone can offer some suggestions.
The application will have a main view with four category buttons, each of which will redirect to a specialized view for the chosen category. The complication is that each category could have tens, maybe hundreds, of 'items', each requiring its own subview when selected.
cat1
cat2
cat3
cat4
 |
 |- DetailCat4
     |- item1
     |- item2
     |- item3
     |- item4
     |- item5
     |- etc...
Now I may have to create each of the subviews by hand, as their format and content will vary wildly, but how do I detect the number of items within a category and then fill a table view with them? I don't really want to have to populate an array by hand, but perhaps I'll have to?
Also, does anyone have any comment on how they think Apple would feel about each of these item subviews being UIWebViews rather than traditional xibs?
Thanks, sorry it's a little messy I'm still trying to get it straight in my head.
Greg.
UIWebViews are fine. There are probably tons of applications on the App Store built with nothing but UIWebViews. I would say that you need some kind of datastore (plist, database, etc.) that represents the hierarchy you are trying to model. Once you have that, filling in table views should be fairly trivial.
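To make the reply concrete: the hierarchy can live in a property list, and the table view's row count falls out of it for free. A minimal sketch of the idea in Python (using `plistlib` as a stand-in for reading the plist on the device; the catalog keys and item fields are made up for illustration):

```python
import plistlib

# Hypothetical catalog: each category maps to a list of item descriptors.
# In the app this would live in a .plist bundled as a resource.
catalog = {
    "DetailCat4": [
        {"title": "item1", "content": "page1.html"},
        {"title": "item2", "content": "page2.html"},
    ],
}

plist_bytes = plistlib.dumps(catalog)    # what you'd ship in the bundle
loaded = plistlib.loads(plist_bytes)     # what the app reads at launch

# The table view's row count is just the length of the category's list;
# no hand-maintained array is needed.
rows_in_cat4 = len(loaded["DetailCat4"])
```

Adding an item then means editing the plist only; no code changes.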
Similar Messages
-
Photo Management with large iPhoto libraries
I have over 50,000 photos in iPhoto, and this slows iPhoto to the point of hanging. I am now importing new photos into a new library, but this is a nuisance when I want to select photos from both libraries for a specific project. I'm not keen on having to import photos from one library to another; it seems very cumbersome. I have backed up my photos onto an external hard drive with Time Machine.
What's the best and most practical way to manage a large number of photos with many themes.
I'd appreciate some suggestions. I'm quite new to iPhoto.
A large SSD. Or external drive. Might help to max out RAM.
Some laptop models can take dual SSD internal
http://www.macsales.com/ssd
Might want to look up the iPhoto forum and support.
https://discussions.apple.com/community/ilife/iphoto
Move iphoto library to external drive -
http://basics4mac.com/article.php/move_iphoto_lib
You probably thought you were in MacBook Pro forum.
https://discussions.apple.com/community/notebooks/macbook_pro
http://www.apple.com/support/macbookpro/ -
I've just changed from a PC to a Mac and have a large number of downloaded WMA music files which I can't get into iTunes. When the library was in Windows, iTunes would convert the files automatically, but this doesn't happen now. I've downloaded a couple of file converters, but these don't seem to work either. Any ideas?
iTunes for Windows (but not iTunes for Mac) can import and convert unprotected WMA tracks. If the tracks are protected by DRM (Digital Rights Management) then it will not accept them.
One option would be to install iTunes on your PC, do the conversion, and then transfer the converted tracks from iTunes on your PC to iTunes on your Mac. -
After applying Patch 9440398 as per Oracle's Doc ID 1072226.1, I successfully created a CODE 128 barcode.
But I am having an issue when creating a barcode whose value is a large number. Specifically, a number larger than around 16 or so digits.
Here's my situation...
In my RTF template I am encoding a barcode for the number 420917229102808239800004365998 as follows:
<?format-barcode:420917229102808239800004365998;'code128c'?>
I then run the report and a PDF is generated with the barcode. Everything looks great so far.
But when I scan the barcode, this is the value I am reading (tried it with several different scanner types):
420917229102808300000000000000
So:
Value I was expecting: 420917229102808239800004365998
Value I actually got: 420917229102808300000000000000
It seems as if the number is getting rounded at the 16th digit (or so, it varies depending of the value I use).
I have tried several examples and all seem to do the same. Anything with 15 digits or fewer seems to work perfectly.
Any ideas?
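Incidentally, truncation at around the 16th significant digit is the classic signature of a decimal string being parsed into an IEEE-754 double, which preserves only about 15-16 significant decimal digits. Whether the template engine actually does this internally is an assumption, but the effect can be sketched in Python with the value from the post:

```python
# The exact value encoded in the RTF template, kept as a string.
value = "420917229102808239800004365998"

# If any layer parses it as an IEEE-754 double, only the first ~16
# significant decimal digits survive the round trip.
as_double = float(value)
round_tripped = f"{as_double:.0f}"

# The leading digits match the original, but the tail is garbage --
# the same symptom as the scanned barcode.
```

Passing the value as a string end to end (never as a numeric type) avoids this class of problem.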
Manny
Yes, I have.
But I have found the cause now.
When working with parameters coming in from the concurrent manager, all the parameters defined in the concurrent program in EBS need to be in the same case (upper or lower) as they have been defined in the data template.
Once I changed them all to the same case, it worked.
thanks for the effort.
regards
Ronny -
Problem fetch large number of records
Hi
I want to fetch a large number of records from the database, and I use a secondary index database to improve performance. For example, my database has 100,000 records and a query fetches 10,000 records from it. I use the secondary database as an index and iterate over it until I have fetched all the records that match my condition, but the performance of this loop is terrible.
I know that with DB_MULTIPLE I could fetch all the records at once and performance would improve, but I read that I cannot use this flag with a secondary index database.
Please help me: is there a flag or approach that fetches all the matching records together, so that I can process the data in my language?
thanks alot
regards
saeed
Hi Saeed,
Could you post here your source code, that is compiled and ready to be executed, so we can take a look at the loop section ?
You won't be able to do bulk fetch, that is retrieval with DB_MULTIPLE given the fact that the records in the primary are unordered by master (you don't have 40K consecutive records with master='master1'). So the only way to do things in this situation would be to position with a cursor in the secondary, on the first record with the secondary key 'master1' retrieve all the duplicate data (primary keys in the primary db) one by one, and do the corresponding gets in the primary database based on the retrieved keys.
Though, there may be another option that should be taken into consideration, if you are willing to handle more work in your source code, that is, having a database that acts as a secondary, in which you'll update the records manually, with regard to the modifications performed in the primary db, without ever associating it with the primary database. This "secondary" would have <master> as key, and <std_id>, <name> (and other fields if you want to) as data. Note that for every modification that your perform on the std_info database you'll have to perform the corresponding modification on this database as well. You'll then be able to do the DBC->c_get() calls on this database with the DB_MULTIPLE flag specified.
I have another question: is there any way to fetch a record by its record number? For example, fetch the record located at the third position in my database.
I guess you're referring to logical record numbers, like a relational database's ROWID. Since your databases are organized as BTrees (without the DB_RECNUM flag specified) this is not possible directly. You could do it with a cursor, iterating through the records and stopping on the record whose number is the one you want (using an incrementing counter to keep track of the position). If your database had been configured for logical record numbers (BTree with DB_RECNUM, Queue or Recno), this would have been possible directly:
http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/logrec.html
http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/renumber.html
Regards,
Andrei -
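The "manual secondary" workaround Andrei describes can be sketched without the Berkeley DB API at all. In this Python sketch, plain dicts stand in for the primary database and the hand-maintained secondary keyed by master (all names are illustrative):

```python
# Primary "database": std_id -> record.
# Manual "secondary": master -> list of std_ids, updated on every write.
primary = {}
secondary = {}

def put(std_id, master, name):
    """Every write to the primary also updates the manual secondary."""
    primary[std_id] = {"master": master, "name": name}
    secondary.setdefault(master, []).append(std_id)

put(1, "master1", "alice")
put(2, "master2", "bob")
put(3, "master1", "carol")

# Bulk fetch for one master key: one lookup in the secondary, then
# direct gets in the primary -- no per-record cursor walk required.
records = [primary[sid] for sid in secondary.get("master1", [])]
```

The cost, as Andrei notes, is that every modification of the primary must be mirrored in the manual secondary by your own code.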
Large number of JSP performance
Hi,
a colleague of mine ran tests with a large number of JSPs and identified a performance problem. I believe I found a solution to his problem. I tested it with WLS 5.1 SP2 and SP3 and the MS jview SDK 4.
The issue was the duration of the initial call of the nth JSP, which matters in our situation as we are doing site hosting.
The solution performs around 14 initial invocations/s no matter whether the invocation is the first or the 3000th, and throughput can go up to 108 JSPs/s when the JSPs are already loaded, the JSPs being the snoopservlet example copied 3000 times.
The ratios are more meaningful than the absolute values, as the test machine (client and WLS 5.1) was a 266 MHz laptop.
I repeat Marc's post of 2/11/2000, as it is an old one:
Hi all,
I'm wondering if any of you has experienced performance issues when deploying a lot of JSPs.
I'm running Weblogic 4.51SP4 with performance pack on NT4 and Jdk1.2.2.
I deployed over 3000 JSPs (identical but with distinct names) on my server.
I took care to precompile them off-line.
To run my tests I used a servlet selecting randomly one of them and
redirecting the request.
getServletContext().getRequestDispatcher(randomUrl).forward(request,response);
The response time slows down dramatically as the number of distinct JSPs invoked grows (up to 100 times the initial response time).
I made some additional tests.
When you set the properties:
weblogic.httpd.servlet.reloadCheckSecs=-1
weblogic.httpd.initArgs.*.jsp=..., pageCheckSeconds=-1, ...
Then the response time for a new JSP seems linked to a "capacity increase process" and depends on the number of previously activated JSPs. If you invoke a previously loaded page, the server answers really fast with no delay.
If you set the previous properties to any other value (0 for example), the response time remains bad even when you invoke a previously loaded page.
SOLUTION DESCRIPTION
Intent
The package described below is designed to allow:
* Fast invocation even with a large number of pages (which can be the case
with Web Hosting)
* Dynamic update of compiled JSP
Implementation
The current implementation has been tested with JDK 1.1 only and works with MS SDK 4.0. It has been tested with WLS 5.1 with service packs 2 and 3.
It should work with most application servers, as its requirements are limited: it only requires that a JSP be able to invoke a class loader.
Principle
For fast invocation, it does not support dynamic compilation as described in the JSP model.
There is no automatic recognition of modifications. Instead, a JSP is made available to invalidate pages which must be updated.
We assume pages managed through this package to be declared in
weblogic.properties as
weblogic.httpd.register.*.ocg=ocgLoaderPkg.ocgServlet
This definition means that, when a servlet or JSP with a .ocg extension is
requested, it is
forwarded to the package.
It implies 2 things:
* Regular JSP handling and package based handling can coexist in the same
Application Server
instance.
* It is possible to extend the implementation to support many extensions
with as many
package instances.
The package (ocgLoaderPkg) contains 2 classes:
* ocgServlet, a servlet instantiating JSP objects using a class loader.
* ocgLoader, the class loader itself.
A single class loader object is created.
Both the JSP instances and classes are cached in hashtables.
The invalidation JSP is named jspUpdate.jsp.
To invalidate a JSP, it simply removes the object and class entries from the caches.
ocgServlet
* Lazily creates the class loader.
* Retrieves the target JSP instance from the cache, if possible.
* Otherwise it uses the class loader to retrieve the target JSP class, creates a target JSP instance and stores it in the cache.
* Forwards the request to the target JSP instance.
ocgLoader
* If the requested class does not have the extension ocgServlet is configured to process, it behaves as a regular class loader and forwards the request to the parent or system class loader.
* Otherwise, it retrieves the class from the cache, if possible.
* Otherwise, it loads the class.
Do you think it is a good solution?
I believe this solution is faster than the standard WLS one because it is a very small piece of code, but also because:
- my class loader is deterministic: if the file has the right extension I don't call the class loader hierarchy first
- I don't try to support jars. That was one of the hardest design decisions. We definitely need a way to update a specific page, but at the same time someone told us NT could have problems handling 3000 files in the same directory (it seems he was wrong).
- I don't try to check whether a class has been updated. I have to ask for a refresh using a JSP now, but it could be an EJB.
- I don't try to check whether a source has been updated.
- as I know the number of JSPs, I can set the initial capacity of the hashtables I use as caches fairly accurately, avoiding rehashing.
Use a profiler to find the bottlenecks in the system. You need to determine where the performance problems (if you even have any) are happening. We can't do that for you.
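The heart of the package above is a pair of caches consulted before any expensive load, plus an explicit invalidation hook. A language-neutral sketch of that caching discipline (Python for brevity; the real implementation is a Java servlet plus custom class loader, and these names are illustrative):

```python
class CachingLoader:
    """Mimics ocgServlet/ocgLoader: check the cache first, load once,
    and allow explicit invalidation (the jspUpdate.jsp role)."""

    def __init__(self):
        self.instance_cache = {}
        self.load_count = 0    # tracks how many expensive loads happened

    def _expensive_load(self, name):
        self.load_count += 1
        return f"<compiled page {name}>"

    def get(self, name):
        # Cache hit: no class-loader work at all, so repeat requests
        # stay fast regardless of how many pages are deployed.
        if name not in self.instance_cache:
            self.instance_cache[name] = self._expensive_load(name)
        return self.instance_cache[name]

    def invalidate(self, name):
        # Drop the cache entry so the next request reloads the
        # freshly deployed page.
        self.instance_cache.pop(name, None)

loader = CachingLoader()
loader.get("page1.ocg")
loader.get("page1.ocg")        # served from cache, no reload
loader.invalidate("page1.ocg")
loader.get("page1.ocg")        # reloaded after invalidation
```

Sizing the cache dict up front corresponds to presetting the Hashtable capacity to avoid rehashing, as the post describes.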
-
Large number of JSP performance [repost for grandemange]
Cheers - Wei
I don't know the upper limit, but I think 80 is too much. I have never used more than 15-20. For navigational attributes, separate tables are created, which causes the performance issue, as it results in a new join at query run time. Just ask your business contact whether these can be reduced. One way could be to model these attributes as separate characteristics. It will certainly help.
Thanks...
Shambhu -
Is there a way of combining a large number of materials into a grouping code
Greetings,
Is there a way of combining a large number of materials into a grouping code, so that and end-user can manage this grouping... adding and removing materials... for auctions?
The materials will likely span material groups in R/3, and there could be thousands of materials. Shopping cart templates would be cumbersome.
I was thinking about a special catalog characteristic with defined names to manage the various groups of materials.
I appreciate any advice... also in awarding points.
Jessica
Hi Jessica
This is my understanding:
- You want to 'specially group' materials other than by material group. Each special group may consist of materials from many material groups.
- User(s) pick materials from this group during 'Auction' creation or 'shopping cart' creation.
Am I right ?
Looks like using a 'Catalog' is the option:
- an MDM Catalog 'mask' or other catalog views to give users a user-specific catalog view
- maintenance of this catalog by the user could become a problem; then you would need to give content management rights to the user
Best regards
Ramki -
Large number of FNDSM and FNDLIBR processes
hi,
description of my system
Oracle EBS 11.5.10 + Oracle 9.2.0.5 + HP-UX 11.11
problem: there are a large number of FNDSM, FNDLIBR and sh processes, around 300 during peak load, but even at no load these processes don't come down, though the Oracle processes come down from 250 to 80. These apps processes just don't get killed automatically.
Can I kill these processes manually?
One more thing: even after stopping applications with adstpall.sh, these processes don't get killed. Is that normal? So I just dismount the database to kill these processes.
And under what circumstances should I run cmclean?
Hi,
problem: there are a large number of FNDSM, FNDLIBR and sh processes, around 300 during peak load, but even at no load these processes don't come down, though the Oracle processes do.
This means there are lots of zombie processes running and all of these need to be killed.
Shut down your application and database and bounce the server, as there are too many zombie processes. I once faced an issue in which, due to these zombie processes, CPU utilization went to 100% continuously.
Once you restart the server, start the database and listener, run cmclean, and start the application services.
One more thing: even after stopping applications with adstpall.sh, these processes don't get killed. Is that normal? So I just dismount the database to kill these processes.
No, it's not normal and should not be neglected. I would also advise you to run the [Oracle Application Object Library Concurrent Manager Setup Test|https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=200360.1]
And under what circumstances should I run cmclean?
[CMCLEAN.SQL - Non Destructive Script to Clean Concurrent Manager Tables|https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=134007.1]
You can run the cmclean if you find that after starting the applications managers are not coming up or actual processes are not equal to target processes.
Thanks,
Anchorage :) -
Fastest way to handle and store a large number of posts in a very short time?
I need to handle a very large number of HTTP posts in a very short period of time. The handling will consist of nothing more than storing the posted data and returning a redirect. The data will be quite small (email, postal code). I don't know exactly how many posts, but somewhere between 50,000 and 500,000 over the course of a minute.
My plan is to use the traffic manager to distribute the load across several data centers, and to have a website scaled to 10 instances per data center. For storage, I thought that Azure table storage would be the ideal way to handle this, but I'm not sure if the latency would prevent my app from handling this much data.
Has anyone done anything similar and have a suggestion for storing the data? Perhaps buffering everything into memory would be ideal and then batching from there to table storage. I'm starting to load-test the direct-to-table-storage solution and am not encouraged.
You are talking about a website with 500,000 posts per minute with redirection, so you are talking about designing a system that can handle at least 500,000 users. Assuming that not all users post within a one-minute timeframe, you are talking about designing a system that can handle millions of users at any one time.
Event Hub architecture is completely different from the HTTP post architecture; every device/user/session writes directly to the hub. I was just wondering if that would actually work better in your situation.
Frank
The site has no session or page display. It literally records a few form values posted from another site and issues a redirect back to the originating site. It is purely for data collection. I'll see if it is possible to write directly to the Event Hub / Service Bus system from a web page. If so, that might work well.
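The "buffer in memory, then batch to table storage" idea mentioned above can be sketched as follows (Python used for illustration; the Azure SDK is not involved, and `flush_to_storage` is a made-up stand-in for whatever batch-write call the storage layer provides):

```python
from typing import Callable

class PostBuffer:
    """Accumulates small posts in memory and flushes them in fixed-size
    batches, trading a little latency for far fewer storage round trips."""

    def __init__(self, batch_size: int, flush_to_storage: Callable[[list], None]):
        self.batch_size = batch_size
        self.flush_to_storage = flush_to_storage
        self.pending = []

    def record(self, email: str, postal_code: str):
        self.pending.append({"email": email, "postal_code": postal_code})
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.flush_to_storage(self.pending)
            self.pending = []

# Demo: collect flushed batches in a list instead of a real storage call.
batches = []
buf = PostBuffer(batch_size=100, flush_to_storage=batches.append)
for i in range(250):
    buf.record(f"user{i}@example.com", "12345")
buf.flush()    # drain the partial final batch
```

In a real deployment you would also flush on a timer and handle process shutdown, since buffered posts are lost if the instance dies before flushing.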
Status and messaging for systems with a large number of classes
Very common dilemma we coders face when creating
systems involving a large number of classes:
Is there any standard framework to take care of the global status of the whole application and of GUI subsystems, for both status handling and reporting messages to users?
Something light if possible... and not too many threads.
Ah, I see,
I found JPanel with CardLayout or a JTabbedPane very good for control of several GUI in an application - as alternative organization tool I use a JTree, which is used for both, selecting and organizing certain tasks or data in the application - tasks are normally done with data associated with them (that is, what an object is for), so basically a click onto a node in this JTree invokes an interface method of that object (the userObject), which is associated with this node.
Event handling should be done by the event-handling thread only as far as possible - it is responsible for it, so leave this job to it. This will give you control over the order in which the events are handled. Sometimes it needs a bit more work to obey this rule - for example, communication coming from the outside (think of a chat channel) must normally be converted to an event source driven by a thread. As soon as it is an event source, you can leave its event handling to the event-handling thread again, and problems with concurrent programming are minimized again.
It is the same with manipulating components or models of components - leave it to the event handling thread using a Runnable and SwingUtilities.invokeLater(Runnable). This way you can be sure that each manipulation is done after the other in the order you have transferred it to the event handling thread.
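The invokeLater rule above can be sketched outside Swing: a single "event thread" drains a queue of callables, so all work it performs happens in submission order. A Python sketch of the pattern (`invoke_later` is a made-up stand-in for SwingUtilities.invokeLater):

```python
import queue
import threading

# A single "event thread" drains a queue of callables, so every mutation
# of shared state happens on one thread, in submission order -- the same
# guarantee SwingUtilities.invokeLater(Runnable) provides in Swing.
tasks = queue.Queue()
log = []                      # stands in for a Swing model being mutated

def event_loop():
    while True:
        task = tasks.get()
        if task is None:      # sentinel value shuts the loop down
            break
        task()

def invoke_later(fn):
    """Queue work for the event thread instead of touching state directly."""
    tasks.put(fn)

worker = threading.Thread(target=event_loop)
worker.start()
for i in range(5):
    invoke_later(lambda i=i: log.append(i))
tasks.put(None)               # shut down after all queued work is done
worker.join()
```

Because only one thread ever touches `log`, no locking is needed and the order of updates is exactly the order of submission.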
When you do this consistently, most of your threads will idle most of the time - so give them a break using Thread.sleep(...) - not all platforms provide preemptive multitasking, and this way it is guaranteed that the event-handling thread will get a chance to run most of the time - which results in fast GUI updates and fast event handling.
Another thing is, that you should use "divide and conquer" also within a single GUI panel - place components in subpanels and transfer the responsibility for the components in this panel to exactly these subpanels - think of a team manager which makes his employees work together. He reports up to his super manager and transfers global order from his boss into specific tasks by delegation to the components, he is managing. If you have this in mind, when you design classes, you will have less problem - each class is responsible for a certain task - define it clearly and define to whom it is reporting (its listeners) and what these listeners may be interested in.
When you design the communication structure within your hierarchy of classes (directors, managers, team managers and workers) have in mind, that the communication structure should not break the management hierarchy. A director gives global orders to a manager, which delegates several tasks to the team managers, which make their workers do what is needed. This structure makes a big company controlable by directors - the same principles can also keep control within an application.
greetings Marsian -
Project Server 2010 - large number of security categories
I am searching for somebody who has experience with a large number of security categories.
I am talking about 1000 different PS security categories in one instance.
Besides the foreseeable usability issues, are there any technical side effects which could appear?
Thanks in advance for your feedback.
Hi,
thanks for your reply, specially for the technical remark what I was looking for.
Of course we have investigated all the other options (RBS, project permissions, simplifying the security concept, discussing the requirements and the process, trainings, departments).
Finally, for this customer we have the challenge of having multiple roles in multiple projects, but with the restriction not to add permissions based on the different roles and the Project Server standard security options.
One example: we have a user who has two roles (commercial manager and team member). The role can only be assigned via the project's team because all other options are not possible (owner, status manager and all RBS options). This user must act in the first project as a commercial manager with more rights than when he is assigned in a second project only as a team member. No grey zone is allowed.
The final analysis for this requirement was (and I don't believe there is another possible way):
Define for each commercial manager his own security category and list the projects where he can act as a commercial manager.
Remember: all other Project Server security possibilities are impossible (in our point of view).
HI
I was wondering if there exists software that handles group mail for a large number of recipients (like 400 addresses) at one time.
I think Mail doesn't accept more than 50 addresses per email at a time.
I used to use a PC-based program called Sarbacane designed for mass emailing (not spamming, that's understood!), with charts, statistics and automatic re-mailing to addresses that did not work, etc. ... does anything like that exist for Mac?
Thank you anyone who has the answer !!
Stephanie
Hello Stephanie
I do the same, and ran into the same problem. It was my ISP that was limiting how many emails I could send at once. I 'think' (because I'm not a tech person) that Mail takes the message and puts 50 or more addresses at the top, and ISPs will see this as a possible spam mail-out. I worked around this by creating groups of fewer than 50 email addresses (using Smart Groups), but after a while it got very tedious having to sort and create criteria to keep each group to just under 50.
There are commercial programs that manage large emailings. I use 'Mailings', and it works fine. I know that there are others as well. Most have a free trial version.
Instead of sending one message with lots of addresses, it sends the email multiple times - once to each person in your list. It takes longer (but it works in background, so who cares) but I have been able to send out to hundreds of addresses with no issues. Hope this helps.
Seth -
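The workaround in the reply - keeping each group under the ISP's roughly-50-address cap - amounts to chunking the recipient list. A quick Python sketch (the limit of 50 is the ISP-specific assumption from this thread):

```python
def chunk_recipients(addresses, limit=50):
    """Split a recipient list into batches no larger than the ISP's cap."""
    return [addresses[i:i + limit] for i in range(0, len(addresses), limit)]

recipients = [f"member{i}@example.com" for i in range(400)]
batches = chunk_recipients(recipients)
# Each batch can then be sent as a separate message (or one message per
# recipient, which is what the 'Mailings' approach in the reply does).
```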
Large number of sequences on Oracle 8i
One possible solution to an issue I am facing is to create a very large number(~20,000) of sequences in the database. I was wondering if anybody has any experience with this, whether it is a good idea or I should find another solution.
Thanks.
Why not use one (or certainly fewer than 20,000) sequence(s) and feed all your needs from it (them)? Do your tables absolutely require sequential numbers, or just unique ones?
I had 6 applications a few years ago sharing the same database, about 80% of the tables in each application used sequences for primary key values and I fed each system off of one sequence.
All I was after was a unique id, so this worked fine. Besides in any normal course of managing even an OLTP system, you're bound to have records deleted, so there will be "holes" in the numbering anyway. -
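The one-sequence-feeds-everything idea is easy to sketch outside the database. Here a single Python counter plays the role of one Oracle sequence supplying primary keys to two hypothetical tables:

```python
import itertools

# One global counter plays the role of a single Oracle sequence
# (e.g. CREATE SEQUENCE global_seq) feeding every table's primary key.
global_seq = itertools.count(start=1)

orders, invoices = [], []
orders.append({"id": next(global_seq), "item": "widget"})
invoices.append({"id": next(global_seq), "total": 9.99})
orders.append({"id": next(global_seq), "item": "gadget"})

# IDs are unique across both tables but not consecutive within either
# table -- exactly the "holes in the numbering" the reply accepts.
all_ids = [r["id"] for r in orders] + [r["id"] for r in invoices]
```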
Should we create a large number of folders within a list?
For a custom list with 50K items it is more convenient for users to create folders to segregate list items logically. But if we consider performance, is it recommended to create a large number of folders within a custom list?
What are the pros and cons of creating folders within a custom list?
Hi SunilKumar,
In a SharePoint list, a folder is itself a list item, so given the large number of list items already in your list, the influence of these extra folders on your site won't be very apparent.
However, for better item management, using extra columns to group items would be more recommended.
Best regards,
Patrick
Patrick Liang
TechNet Community Support
Maybe you are looking for
-
I've downloaded the Macromedia Flash player several times but I can never get it to work. When I open a file it doesn't do anything. It works when I use VLC, but I can't fast-forward or do anything except watch the clip. Why doesn't Macromedia work?
-
Monitoring thread usage for ExecutorService objects
Is there a way to find out how many active threads are running in an ExecutorService? I am using a fixed thread pool (Executors.newFixedThreadPool()) and I want to monitor the thread count in order to see if the pool size I chose is too small.
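For the record, Java's `ThreadPoolExecutor` exposes `getActiveCount()` for exactly this (in practice the object returned by `Executors.newFixedThreadPool` is a `ThreadPoolExecutor`, so a cast suffices). The same measurement can also be made by wrapping each submitted task with a counter; this Python sketch shows the wrapping approach, with Python's own fixed pool standing in for the Java one:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

class CountingExecutor:
    """Wraps a fixed pool and tracks how many submitted tasks are in flight."""

    def __init__(self, max_workers):
        self._pool = ThreadPoolExecutor(max_workers=max_workers)
        self._lock = threading.Lock()
        self.active = 0
        self.peak = 0          # high-water mark to compare with pool size

    def submit(self, fn, *args):
        def wrapped():
            with self._lock:
                self.active += 1
                self.peak = max(self.peak, self.active)
            try:
                return fn(*args)
            finally:
                with self._lock:
                    self.active -= 1
        return self._pool.submit(wrapped)

    def shutdown(self):
        self._pool.shutdown(wait=True)

ex = CountingExecutor(max_workers=4)
gate = threading.Event()
futures = [ex.submit(gate.wait) for _ in range(8)]  # 8 tasks, only 4 workers
while ex.active < 4:           # wait until every worker is busy
    time.sleep(0.01)
gate.set()                     # release the blocked tasks
ex.shutdown()
# ex.peak now equals the pool size: the pool was saturated, a sign it
# may be too small for the offered load.
```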
-
Responsibility is not showing on front page
Hi experts, we are using R12.0.6 EBS on Windows Server 2003. We added one responsibility for a particular user, and it was added. But when we log in as that user, it does not show the added responsibility. When we restart the server it shows, but we can't do that every time.
-
Converting Publisher files to InDesign
I'm currently searching for a cost efficient and reliable way to convert Publisher files to InDesign. Any suggestions for converting these files would be greatly appreciated. Thank you!
-
How do I install a Ricoh MP 4000 on my Mac?
How do I install a Ricoh MP 4000 on my Mac?