Automating Complex jobs - advice needed (all are welcome)

Hi all,
We have successfully implemented an XML workflow, fully automated through a script that places all tables and images according to their citations, and it is working fine. These are one-time scripting jobs, since every issue uses the same style and layout; they are magazines and journals.
Now we are concentrating on automating books. We know this is not a one-time script, since each book has different elements, styles and boxes.
Here is where we need advice from the scripting folks on how to tackle these types of projects.
The projects we have are highly complex jobs with lots of boxes (each box has its own design). We are going to take the book projects into an XML workflow using DocBook.
All the boxes are placed in a library. Our question: if we place the box styles in a library, is a script capable of dragging the appropriate box from the library and placing the text into it automatically? We haven't tried using the library yet.
Sorry, those are the only ideas we have; anybody who has come across complex job automation, please share some ideas on how to tackle these types of projects.
Thanks,
Kavya

Do you want "general" advice or something more specific for your project?
Generally, when I have a large project, I like to break it down into smaller components. I script one simple action to make sure it all works. Then I try another part of the job and make sure all those commands work. Once I have about 60% of the core functions worked out, I start to combine them into a workflow application (I use Xcode to develop AppleScript applications). At each step I make sure to code for flexibility for future changes.
As for dealing with library objects like you mention, I have not tried to work with libraries. You should make sure that you can script the library objects if that is how you are going to fill items or build a document.
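If the library objects are scriptable the way I'd expect, the idea would look something like this. This is a sketch only, driving InDesign over COM from Python on Windows (pywin32); the library and asset names are made up, and the calls should be checked against your version's scripting DOM (the same objects exist in ExtendScript and AppleScript):

    # Untested sketch: assumes Windows, pywin32, InDesign running with a
    # document and a box-style library open; all names are hypothetical.
    import win32com.client

    app = win32com.client.Dispatch("InDesign.Application")
    doc = app.ActiveDocument

    # Library assets can be looked up by name and placed into a document,
    # so a script could pick the right box per DocBook element and then
    # pour that element's text into the placed frame.
    lib = app.Libraries.Item(1)              # a library already open in the UI
    asset = lib.Assets.Item("warning-box")   # one box design per asset
    items = asset.PlaceAsset(doc)            # returns the new page item(s)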
Chris

Similar Messages

  • Applying for a mortgage soon and need advice. All responses welcome

    I plan to apply for a mortgage pre-approval in 02/2016 and want to remove all negative information from my credit reports before doing so. The only thing standing between me and a clean credit report is Bank of America and a CC I had with them. Late payments are 30 days late Nov 2008, 60 days late Dec 2008, 30 days late April 2009, 60 days late May 2009, 90 days late June 2009, and 30 days late January 2011. I have sent 2 goodwill letters to their general address from the credit report and one email to the CEO, and they shut me down each time by saying that the information is being reported accurately. I believe that they are reporting inaccurate information and I think I would be able to have this removed from my report, but is it worth it at this time? It's been 6 years since 5 of the 6 late payments occurred, and I was wondering whether removing the late payments would boost my score or hurt my score due to losing the age of the account, which was opened in 2005. My current AAoA is 4 years 4 months. The inaccurate information shows that the account was closed and opened multiple times, and the last reported date was 2/2015 despite my last payment being made in 2/2014 to pay off the old money owed on the card. Also, it's showing a $25 monthly payment. Any suggestions?

    The saga continues.... went round to my mate's and we cleared the CMOS, tried removing one memory stick and rebooting, then the other; still nothing, except this time when we try to boot, the computer gives out a continuous beep and the LEDs still stay lit. Anyway, after much trying of different ways, we finally declared it dead at 1600 hours. Cause of death: heatstroke, lol.
    Anyway, with tears rolling down my face, I ordered a new Asus 64-bit socket 939 motherboard, a new 6800 PCI Express graphics card and new water blocks, and my mate kindly donated a socket 939 Athlon 64 3700 processor; old, but hey, beggars can't be choosers.
    So guys, the problem's not sorted but cured, if you know what I mean. Thanks to all for your support and advice; it's a great community you have here. If I can ever be of assistance, please don't hesitate to give me a shout.
    Regards and respect

  • Complex query advice needed

    I have a MySQL table that basically represents a log of daily events. I need to be able to get the total number of events for each week in a year. How can I efficiently query the database and tally the total records for each week in a year, given that my events are recorded daily? Any advice would be much appreciated. Thanks.

    Presumably the events have a timestamp.
    One possibility (you need to deal with border conditions); a rough sketch in code follows the steps.
    1. Find a function that returns day of the year from a timestamp (dayInYear)
    2. Determine offset from beginning of year to 'first week' in year. (Offset)
    3. week = (dayInYear(timestamp) - Offset)/7
    4. Write a query that does a sum using group by on the week value.
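    In Python, those steps might look like this (a sketch only; the "first week" here starts on the year's first Monday, and the timestamps are stand-ins for your rows):

        from collections import Counter
        from datetime import date, datetime

        def week_index(ts, year):
            # Step 2: offset from Jan 1 to the year's first Monday.
            offset = (7 - date(year, 1, 1).weekday()) % 7
            # Step 3: week = (dayInYear - Offset) / 7; days before the
            # first Monday land in week -1 (a border condition to decide).
            return (ts.timetuple().tm_yday - 1 - offset) // 7

        def weekly_totals(timestamps, year):
            # Step 4: the GROUP BY is just a tally once each row has a week.
            return Counter(week_index(ts, year)
                           for ts in timestamps if ts.year == year)

        events = [datetime(2024, 1, d) for d in (1, 2, 8, 9, 15)]
        print(weekly_totals(events, 2024))   # Counter({0: 2, 1: 2, 2: 1})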

  • Advice needed on Hyperion Job Opportunities

    Hello,
    I'm trying to venture into Hyperion Consulting and I need some advice on Hyperion Job Opportunities. I thought this would be a good place to ask.
    Let me give you my background. I have a Bachelor's degree in Computer Engineering, a Master's degree in Computer Science, and an MBA in Finance from a top-15 business school in the U.S. I have been working as a Senior Financial Analyst for the past 5 years, primarily doing forecasting, budgeting etc. using Excel. I'm an advanced user of Excel, proficient in vlookups, pivot tables etc. I've also used Hyperion sparingly, to get budget and actuals numbers from the Hyperion system.
    Now, I would like to get into Hyperion Consulting. I'm debating whether I should learn Hyperion Planning or HFM (Hyperion Financial Management), and I'm not able to decide. I want to base my decision on which of these two systems (HFM/Hyperion Planning) has more job opportunities and pays a higher billing rate.
    I've been told that job opportunities in HFM are much scarcer than in Hyperion Planning, but that billing rates for HFM are higher. Is this true?
    I would really appreciate it if anyone could advise me on this issue. It would be great if you could also provide me with the average billing rates for both HFM and Hyperion Planning.
    Thank you very much in advance for your help

    Hi,
    This topic is quite controversial for many consultants who chose to carry on with Planning. Let us be honest for a minute: as Planning consultants, we don't like HFM'ers much, but there is a reason behind it. They tend to perceive us as IT geeks. Well, you may argue that that's not necessarily a bad thing, but are we not more than that? Once I had a rather strange conversation with the head of the HFM team at my previous firm. They were trying to hire someone for their team and interviewing a few people weekly, but couldn't find anyone for a long time. I happened to ask him how it was going and he said: "Well, you know, physical appearance and the candidate's care for their outfit is very important for us. We are not like you guys; we are dealing with CFOs and even CEOs." I replied: "Yeah, you are right, we do the design with janitors."
    On the contrary, most of the time Planning projects require you to interact with a much larger community within the company than HFM projects do. For example, I have discussed the processes involved in producing steel products or petrochemicals with production unit heads. I have hands-on knowledge of the telecommunications, insurance and government sectors. That's why it's so much fun for me, and I love doing it.
    As for how much you make, it might be true that HFM consultants are paid a little more than Planning consultants at the initial grades. But as you move ahead in your career, the gap closes. You can check Monster or Glassdoor yourself: Planning architects are paid more than their HFM counterparts by a significant margin.
    It is, I think, a fact with little argument that Planning requires you to be a little more technical compared to HFM, but in return you have more fun and gain more experience with it. It's a personal choice of course, and I know people who do both. The world needs HFM'ers as much as it needs Planning consultants; well, maybe half as much.
    Good luck.
    Cheers,
    Alp

  • Site Survey Theory .... All Comments, Thoughts, and Theories are welcome

    Hello All,
    I am seeking opinions about a survey methodology that was told to me today.  I have my own thoughts about it but I just want to make sure I'm not off base so any help is greatly appreciated.  The requirements for the wireless network are as follows:
    - Data and Voice
    - Clients will utilize all bands but newer devices will be pushed to 5GHz 
    - Clients are laptops and iPads right now but the future goal is to implement BYOD
    So, here is the methodology that was told to me: "We survey only in the 5GHz spectrum because if you have coverage in that spectrum (5GHz), the coverage in the 2.4GHz spectrum will be fine or even better than that. Therefore, we feel comfortable not capturing the data in the 2.4GHz spectrum."
    Thoughts about the above statement are welcome!!!
    Thanks,
    Malwan

    The essential question for a high-density design is how many channels for each band will be needed to match the client base? This can be a tricky question since even dual band capable clients do not always select the faster 5 GHz band. Since bandwidth in 2.4 GHz is going to be limited, 5 GHz must be relied on to reach the goal.
    Dual band adapters have been shipping with most laptops for some time. This does not mean that every laptop is a dual band client, but many are. Simply having a dual band client does not guarantee that it will choose 5 GHz over 2.4 GHz. The Microsoft Windows operating system defaults to a Wi-Fi channel search that starts with the 5 GHz channel 36 and continues searching through all of the 5 GHz channels that the client is capable of. If no 5 GHz AP is found then it will continue the search in 2.4 GHz starting at channel 1. Unless the Windows default is changed or the user has chosen a third party Wi-Fi utility to set spectrum preference to 2.4 GHz, the client radio will first try to associate to a 5 GHz AP. Apple Computer's latest release for Atheros and Broadcom chipsets also searches 5 GHz first.
    The Cisco BandSelect feature enables the infrastructure to optimize these types of client connection choices. Where possible, it helps make sure that devices are attaching to the 5 GHz spectrum channels where interference sources tend to be significantly lighter. A much greater channel selection leads to the alleviation of bandwidth challenges.
    Tablet computers and smartphones have begun entering the market at a staggering rate. The vast majority of smartphones shipping today operate in 2.4 GHz only. While many of them are 802.11n clients, most of these have implemented single input, single output (SISO) radios rather than multiple input, multiple output (MIMO). A SISO device is only capable of supporting up to MCS7 data rates (65 Mbps in a 20 MHz channel).

  • I keep getting an error code that my billing info doesn't match my card info. I verified with my bank, the bank card, and my iTunes account and all are exactly the same. I need this crap fixed, how?

    I keep getting an error that my billing info for my card doesn't match my bank info, but it does. Nothing has ever been changed. I called my bank and the card company, and all are exactly the same, word for word, as my billing info in iTunes.
    In fact, every time I get the error code that billing doesn't match, my card gets charged a $1.00 authorization fee. It later falls off, but it shows my card is being billed when they say they can't verify billing info.
    I called Apple support and they want to charge me to get their crap together.
    I need to do some updates and buy a couple of apps for work, and am about ready to go to Android if I can't get it fixed.
    $##@@$%%$##$%%$$%$!!!!!!!!!!!

    If you're in the US, this is usually caused by the simple fact that what the Postal Service shows as your address does not match what you're entering. Go here & see:
    https://tools.usps.com/go/ZipLookupAction!input.action

  • I have several Macs with Intel Core 2 Duo processors. All are running OS X 10.6.8. Is it necessary to upgrade to Lion before upgrading to Mountain Lion when it comes out this summer? Do I need to have the "base" version of 10.7?

    I have several Macs with Intel Core 2 Duo processors. All are running OS X 10.6.8. Is it necessary to upgrade to Lion before upgrading to Mountain Lion when it appears this summer? Do I need to have the "base" version of 10.7 first?

    This is unknown, it will be up to Apple and AFAIK they haven't announced that yet.

  • Hi all, as part of my job I need to educate the end users about BW...

    Hi All,
    As part of my job I need to educate the end users about BW. For that purpose I need to prepare a document which is easy for the end-user community to understand. Any documents about this would help a lot.
    Could you please send them to this mail id: [email protected]
    Points will be awarded for this..
    Thanks in advance..
    Thanks & Regards,
    Suresh..

    Hi
    I don't know what everyone is sending, but there is no standard BW end-user manual. Every company/project has different requirements, and the queries and reports generated contain different data. It's best to use one of the templates available in the WIKI. A simple document for end users should contain screenshots of the query/application and the steps taken to access the query. If you have SharePoint or some other shared directory, you can also add screencam videos training users how to use the BW system. This has always been a good way to teach end users.
    Hope this helps.
    Thanks

  • HT204022 I have 1200 pics on camera roll and 900 on photo stream. 1. why are they different? 2. if all are supposedly backed up on iCloud if I delete all photos on my phone will they still be on iCloud or do I need to separately download them to my pc first?

    I have 1200 pics on camera roll and 900 on photo stream. 1. why are they different? 2. if all are supposedly backed up on iCloud if I delete all photos on my phone will they still be on iCloud or do I need to separately download them to my pc first?

    Photos are only uploaded to iCloud after Photo Stream is enabled. It's possible that you already had 300 photos in your camera roll when you turned it on, so they weren't uploaded. Also, Photo Stream only maintains photos for 30 days, although earlier photos already streamed to your device from Photo Stream are not deleted. If you had turned Photo Stream off, then back on, and the 300 photos fell outside of this 30-day window at the time, they would not be in your Photo Stream album. (FYI, Photo Stream will also only save the last 1000 photos.)
    If you delete the photos on your phone from the camera roll album, Photo Stream will not be affected. If you delete them from the Photo Stream album on your phone, they will be deleted from your photo stream on your phone and on any other devices connected to the same iCloud account/photo stream.
    To keep your camera roll photos permanently, don't rely on Photo Stream as a backup. Import them to your computer (http://support.apple.com/kb/HT4083).

  • I rented and downloaded two movies from iTunes via my iPad. They show up as purchased in iTunes but won't play. They do not show up in the iPad video app at all. Any ideas on what happened and how to fix it are welcome. Thanks

    I rented two movies from iTunes using my iPad. They show up as purchased but will not play from there. I get a message saying I have 24 hours to watch, then it just goes back to iTunes. I looked in the video app, where they usually go, and nothing is there. Any ideas for a fix are welcome.
    Thanks

    Did you try tapping on one of them in that list in the iPod app? This has happened to me and I was able to start the movie from there. You could try restarting the iPad, then go into the iPod app and try again.
    Otherwise I'm stumped, unless it is a corrupt download. You did download right onto the iPad, correct? If you downloaded in iTunes, there is a Move tab or function that you have to select to transfer the movies to the iPad when you sync. They just don't sync over without selecting Move.

  • Advice needed: is BDB a good fit for what I aim at?

    Hello everyone,
    I'm not a BDB user (yet), but I really think that the BDB library is the perfect fit for my needs.
    I'm designing an application with a "tricky" part that requires a very fast data storage/retrieval solution, mainly for writes (but for reads too).
    Here's a quick summary of this tricky part, which should use at least 2 databases:
    - the first DB will hold references to contents, with a few writes per hour (the references being "pushed" to it from a separate admin back end), but an expected high number of reads
    - the second DB will log requests and other events on the references contained in the first DB: it is planned that, on average, one read from DB1 will produce five times as many writes into DB2.
    To illustrate:
    DB1 => ~25 writes / ~100,000 reads per hour
    DB2 => ~500,000 writes / *(60?) reads per hour
    (*will explain about reads on DB2 later in this post)
    Reads and writes on both DBs are not linear; say that for 500,000 writes per hour, you could have the first 250,000 done within 20 minutes, for instance. There will be peaks of activity, and low-activity phases as well.
    That being said, do the BDB experts here think that BDB is a good fit for such a need? If so, or if not, could you please let me know what makes you think so? Many thanks in advance.
    Now, about the "*(60?) reads per hour" for DB2: actually, data from DB2 should be accessed in real time for reporting. As of now, here is what I think I should do to ensure and preserve a high write throughput and not miss any write into DB2 => once per minute, another "DB2" is created, which records the new events from then on. The "previous" DB2 is then dumped/exported into another database, which will be queried for real-time reporting (not exactly real time, but up to five minutes is an acceptable delay).
    So, in my first approach, DB2 is "stopped" then dumped each minute to another DB (not necessarily BDB, by the way; the data could probably be restructured another way into another kind of NoSQL storage to facilitate querying and retrieval from the admin back end), which would make 60 reads per hour (but "entire" reads, of the full DB); a rough sketch of what I mean follows.
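    Something like this, in Python (the berkeleydb bindings are the bsddb3 successor; the file names, key layout and the bounded demo loop are just for illustration):

        # pip install berkeleydb  (bsddb3 on older Pythons)
        import time
        from berkeleydb import db

        def open_segment(seq):
            d = db.DB()
            d.open("events-%06d.db" % seq, dbtype=db.DB_BTREE, flags=db.DB_CREATE)
            return d

        def export_segment(closed):
            # The once-per-minute "entire read": walk the closed segment
            # with a cursor, handing records to the reporting store (stubbed).
            cur = closed.cursor()
            rec = cur.first()
            while rec is not None:
                key, value = rec
                # ... ship (key, value) to the reporting/NoSQL store ...
                rec = cur.next()
            cur.close()
            closed.close()

        seq, current = 0, open_segment(0)
        deadline = time.time() + 60        # the 60-second rotation interval
        for _ in range(100000):            # stand-in for the live event stream
            current.put(str(time.time_ns()).encode(), b"event payload")
            if time.time() >= deadline:    # writers switch first, then dump
                previous, seq = current, seq + 1
                current = open_segment(seq)
                export_segment(previous)
                deadline += 60
        export_segment(current)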
    The questions are:
    - do you think that renewing DB2 this often would improve or strain performance?
    - is BDB good and fast at doing massive dumps/exports? (OK: 500,000 entries per hour would make ~8,300 entries per minute on average, so let's say a dump's max size is 24,000 rows of data)
    - would it or would it not be better to read directly from the current DB2 while it is (intensively) storing new rows, which would avoid the need to dump each minute and provide more real-time features? (then I would just need a daily dump, to archive the "old" data)
    Anyone who has had to face such questions already is welcome, as well as any BDB user who thinks they can help on this topic!
    Many thanks in advance for your advice and knowledge.
    Cheers,
    Jimshell

    Hi Ashok
    Many thanks for your fast reply again :)
    Ashok_Ora wrote:
    Great -- thanks for the clarification.
    Thank YOU, my first post was indeed a bit confusing, at least about the reads on DB2.
    Ashok_Ora wrote:
    Based on this information, it appears that you're generating about 12 GB/day into DB2, which is about a terabyte of data every 3 months. Here are some things to consider for ad-hoc querying of about 1 TB of data (which is not a small amount of data).
    That's right, this is quite a huge lot of data, and it will keep growing and growing... Although the main goal of the app is to be able to achieve (almost) real-time reporting, it will also (potentially) need to be able to compute data over different time ranges, including yearly ranges for instance. But in this case, the real-time capabilities wouldn't be relevant, I guess: if you look at some data over a year's span, you probably don't need it to be accurate to a daily interval, for instance (well, I guess), so this part of the app would probably only use the "very old" data (not the current day's data), whatever it is stored in...
    Ashok_Ora wrote:
    Query performance is dramatically improved by using indexes. On the other hand, indexing data during the insert operation is going to add some overhead to the insert - this will vary depending on how many fields you want to index (how many secondary indices you want to create). BDB automatically indexes the primary key. Generally, any approach that you consider for satisfying the reporting requirement will benefit from indexing the data.
    Thanks for pointing that out! I did envisage using indexes, but my concern was (and you guessed it) the expected overhead that they bring. At this stage (but I may be wrong, this is just a study in progress, which will also need proper tests and benchmarking), I plan to favour write speed over everything else, to ensure that all the incoming data is indeed stored, even if it is quite tough to handle in the primary stored form.
    I prefer to envisage (but again, it's not certain that this is the right way of doing it) very fast inserts, then possibly re-processing (sort of) the data later, and (maybe? certainly?) elsewhere, in order to have it more "query friendly" and efficient for moderately complex queries for legible reports/charts.
    Ashok_Ora wrote:
    Here are some alternatives to consider, for the reporting application:
    - Move the data to another system like MongoDB or CouchDB as you suggest and run the queries there. The obvious cost is the movement of data and maintaining two different repositories. You can implement the data movement in the way I suggested earlier (close "old" and open "new" periodically).
    This is pretty much in line with what I had in mind when posting my question here :).
    I found out in several benchmarks (there are not a lot, but I did find some ^^) that BDB, amongst others, is optimized for bulk queries, i.e. retrieving a whole lot of data at once is faster than, for instance, retrieving the same row n times. Is that right? Now, I guess that this is tightly related to the configuration and the server's performance...
    The process would then feed data into a new "DB2" instance every 60 seconds, dumping/merging the previous one into another DB (BDB or else), which would grow until some defined limit.
    Would the "old DB2" > "main, current archive" step be a heavy/tricky process, according to you? Especially as the "archive" DB keeps growing and growing - what would be a decent "limit" to take into account? I guess that 1 TB for 3 months of data would be a bit big, wouldn't it?
    Ashok_Ora wrote:
    - Use BDB's SQL API to insert and read data in DB1 and DB2. You should be able to run ad-hoc queries using SQL. After doing some experiments, you might decide to add a few indices to the system. This approach eliminates the need to move the data and maintain separate repositories. It's simpler.
    I read a bit about it, and these are indeed very interesting capabilities - especially as I know how to write decent SQL statements.
    That would mean that DB2 could grow beyond just a 60-second time span - but would this growth alter the write throughput? I guess so... This will require proper tests, definitely.
    Now, I plan for the "real" data (the "meaningful part of the data"), except timestamps, to be stored in quite a "NoSQL" way (this term is "à la mode"...), say as JSON objects (or something close to it).
    This is why I envisaged MongoDB, for instance, as the DB layer for the reporting part, as it is able to query directly into JSON, with a specific way to handle "indexes" too. But I'm no MongoDB expert in any way, so again I'm not sure at all that it is a good fit (just as much as I'm not sure right now what the proper, most efficient approach is, at this stage).
    Ashok_Ora wrote:
    - Use the Oracle external table mechanism (overview and how-to: http://docs.oracle.com/cd/B28359_01/server.111/b28319/et_concepts.htm) to query the data from Oracle Database. Again, you don't need to move the data. You won't be able to create indices on the external tables. If you do want to move data from the BDB repository into Oracle DB, you can run "insert into <oracle_table> select * from <external_table_in_DB2>;". As you know, Oracle Database is an excellent database for all sorts of applications, including complex reporting applications.
    This is VERY interesting. VERY.
    And Oracle DB is, you're right, a very powerful and flexible database for every kind of process.
    I'll look into the docs carefully; many thanks for pointing that out (again!) :)
    I have not yet decided whether the final application will be free or open source, but this will eventually be a real question. Right now, I don't want to think about it, and just want to find the best technical solution(s) to achieve the best possible results.
    And BDB and Oracle DB are very serious competitors, definitely ;)
    Ashok_Ora wrote:
    Hope this was helpful. Let me know your thoughts.
    It definitely is very helpful! It makes things clearer and allows me to get more into BDB (and Oracle as well, with your latest reply), and that's much appreciated. :)
    As I said, my primary goal is to ensure the highest write throughput - I cannot miss any incoming data, as there is no (easy/efficient) way to re-ask for what would be lost and get it again while being sure that it hadn't changed (the simple act of re-asking would induce data flaws, actually).
    So everything else (including reporting, stats, etc.) IS secondary, as long as what comes in is always stored for sure (almost) as soon as it comes in.
    This is why, in this context, "real" real time is not really crucial, and it can be "1-minute delayed" real time (it could even be "5-minute delayed", actually, but let's be a bit demanding ^^).
    Ashok_Ora wrote:
    Just out of curiosity, can you tell us some additional details about your application?
    Of course, I owe you a bit more detail, as you have helped me a lot in my research/study :)
    The application is sort of a tracking service. It is primarily intended to serve the very specific needs of a client of mine: they have several applications that all use the same "contents". Those contents can be anything - text, HTML, images, whatever - and they need to know almost in real time which application (used by which external client/device) is requesting resources, which ones, from where, in which locale/area and language, etc.
    Really a kind of "Google Analytics" thing (which I pointed out at the very beginning, but they need something more specific and, above all, they need to keep all the data with them, so GA is not a solution here).
    So, as you can guess, this is pretty much... big. On paper, at least. Not sure if this will ever be implemented one day, to be honest with you, but I really want to do the technical study seriously and bring the best options so that they know where they plan to go.
    As for me, I would definitely love it if this could become reality; this is very interesting and exciting stuff. Especially as it requires seeing things as they are and not falling into the "NoSQL fashion" for the sake of being "cool". I don't want a cool application, I want an efficient one that fits the needs ;) What is very interesting here is that BDB is not new at all, yet it's one of the most serious identified players so far!
    Ashok_Ora wrote:
    Thanks and warm regards.
    ashok
    Many thanks again, Ashok!
    I'll leave this question open, in order to keep posting as I progress (and, above all, to be able to get your thoughts and rewarding comments and advice :) )
    Cheers,
    Jimshell

  • Database advice needed

    Since I have no knowledge of databases, I'm turning to my fellow Arch users for advice. I would like to have a collection of quotes. I don't believe something as simple as fortune text files would be sufficient, because I would like the ability to add meta information which classifies the subject or nature of a quote: tagging the moral of one of Aesop's fables as 'moral', or tagging a quote from Jefferson as 'personal liberty' to signify the subject of the quote. A database, as far as I know, will help simplify the maintenance of and access to this collection. As for future plans, I may use this database to put a quote on my website. So my problem is that I don't know where to go from here. A simple interface is preferred but not necessary. As for integrating the database with my website, my hosting environment is the math department at Kansas State University. Thus my host will change when I change schools. Any suggestions or criticisms of my plans are welcome.

    OK poet... here's a quick breakdown:
    sqlite (mentioned twice) is rather interesting. It's self-contained, meaning you don't need a separate daemon running to retrieve the data (check out trac, which uses sqlite exclusively for all its wiki data). There are also numerous sqlite language bindings (pysqlite is in my repo + AUR).
    mysql most likely outperforms sqlite; however, it requires a decent chunk of DB knowledge, and runs as a separate daemon...
    A lot of people love mysql, but me, not being a web developer, I would spring for sqlite instead.
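    To make the sqlite route concrete, here's a small sketch with Python's built-in sqlite3 module; the schema (a quotes table, a tags table, and a join table so one quote can carry several tags) is just one way to cut it, and all the names are illustrative:

        import sqlite3

        conn = sqlite3.connect("quotes.db")
        conn.executescript("""
        CREATE TABLE IF NOT EXISTS quotes (
            id     INTEGER PRIMARY KEY,
            author TEXT NOT NULL,
            body   TEXT NOT NULL
        );
        CREATE TABLE IF NOT EXISTS tags (
            id   INTEGER PRIMARY KEY,
            name TEXT UNIQUE NOT NULL
        );
        CREATE TABLE IF NOT EXISTS quote_tags (
            quote_id INTEGER REFERENCES quotes(id),
            tag_id   INTEGER REFERENCES tags(id),
            PRIMARY KEY (quote_id, tag_id)
        );
        """)

        def add_quote(author, body, tags):
            cur = conn.execute("INSERT INTO quotes (author, body) VALUES (?, ?)",
                               (author, body))
            for name in tags:
                conn.execute("INSERT OR IGNORE INTO tags (name) VALUES (?)", (name,))
                conn.execute("INSERT INTO quote_tags "
                             "SELECT ?, id FROM tags WHERE name = ?",
                             (cur.lastrowid, name))
            conn.commit()

        # Tag a quote, then pull everything filed under 'personal liberty'.
        add_quote("Jefferson", "An example quote body ...", ["personal liberty"])
        for author, body in conn.execute(
                "SELECT q.author, q.body FROM quotes q "
                "JOIN quote_tags qt ON qt.quote_id = q.id "
                "JOIN tags t ON t.id = qt.tag_id WHERE t.name = ?",
                ("personal liberty",)):
            print(author, "-", body)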

  • Advice needed - IE Browser

    I'm not sure if this is the correct forum, but I need some good advice on where to start. I like the idea of keeping my applications 100% Java, or as near as possible. I am not a Windows programmer and I haven't used the ActiveX Bridge yet.
    Problem: I need to capture events like URLs and mouseovers etc. from an IE browser and send actions to the IE browser. I need to do this using a Java application on a Windows OS (doesn't matter which one). Any good pointers on where to start would be very welcome.
    Or should I give up and use C++?

    Well, at the risk of being battered by the gallery here: your application seems a natural fit for JavaScript.
    This seems pretty reasonable to me too.
    Problem: I need to capture events like URLs and mouseovers etc. from an IE Browser and send actions to the IE Browser. I need to do this using a Java application on a Windows OS (doesn't matter which one). Any good pointers to where to start would be very welcome.
    This confuses me. You are capturing events from where (IE or the Java application), and passing them to where (IE or the Java application)? You mention IE for both instances but then also talk about the Java application, so I really don't understand the flow here... could you clarify?
    If you are going from Java to IE, you could probably just use a combination of Java and JavaScript. If you are doing something more complex, you will need something more complex. I have not used the technologies that allow you to access Win32 APIs or apps from Java directly very much. If you know how to do it quickly in C++ yourself, that might be wise.
    What you could do is write some sort of interface to use in Java to access the IE API, and then write an implementation using native methods. This would let you get things going now while keeping everything flexible, so you can plug in a pure Java solution later; the sketch below shows the shape of it.
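    To show what I mean by the interface idea (sketched in Python for brevity; the original suggestion was Java plus native methods, and every name here is hypothetical):

        from abc import ABC, abstractmethod

        class BrowserAutomation(ABC):
            """The interface your app codes against from day one."""
            @abstractmethod
            def on_navigate(self, url):   # event captured from the browser
                ...
            @abstractmethod
            def navigate(self, url):      # action sent to the browser
                ...

        class StubAutomation(BrowserAutomation):
            # Stand-in implementation; a native (JNI/COM) one slots in
            # here now, and a pure-Java one can replace it later.
            def on_navigate(self, url):
                print("browser navigated to", url)
            def navigate(self, url):
                print("telling browser to open", url)

        agent = StubAutomation()
        agent.navigate("http://example.com")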
    Well, that's my two cents.

  • Is there any way I may set a low resource priority for a specific job or even all SQL jobs?

    Is there any way I can set a low resource priority for a specific job, or even all SQL jobs?
    Our database is quite big and everything works OK and very fast, except the SQL jobs, which are used mainly for maintenance purposes.
    I have one specific job which runs for 2 minutes and takes a lot of resources, which may affect the execution of other stored procedures that should execute fast. The worst part is that this job has to be executed during the most active working hours. It does not matter to me how long this job takes to execute; I just do not want it to use so many resources.
    I also noticed that when the SQL backup job (about 4 minutes) is scheduled, it also takes a lot of resources, and sometimes because of that I receive a "login timeout" error on my web site.

    Depending on your SQL Server version and edition, you can use Resource Governor to limit the CPU and memory (a sketch follows at the end of this post).
    In most typical cases, you can classify sessions by user name and time and cap how much CPU and memory they can use. Refer to: https://msdn.microsoft.com/en-us/library/cc645892.aspx
    But in your case, the problem seems to be with the maintenance jobs, which could run as the SQL Server service account.
    Also, if the backups are third-party tool backups, you can specify the priority level and other options to give them lower priority.
    It is possible to do that even with native backups, for example by limiting BUFFERCOUNT and MAXTRANSFERSIZE if the server is under memory pressure. Sometimes striping the backups into multiple sets across different devices can help as well, even though striping introduces extra complexity.
    Maybe you need to rethink your backup/recovery strategy.
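    For example, along these lines: a hedged sketch issued from Python via pyodbc, where the pool and group names, the 20% CPU cap and the maintenance login are all hypothetical (and Resource Governor needs an edition that ships it):

        import pyodbc

        # Assumes the ODBC driver below is installed and a sysadmin login;
        # the classifier function must live in master.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;DATABASE=master;"
            "Trusted_Connection=yes", autocommit=True)
        cur = conn.cursor()

        # Cap the pool the maintenance jobs will be routed into.
        cur.execute("CREATE RESOURCE POOL MaintPool WITH (MAX_CPU_PERCENT = 20)")
        cur.execute("CREATE WORKLOAD GROUP MaintGroup USING MaintPool")

        # Route the account the jobs run under into the capped group.
        cur.execute("""
        CREATE FUNCTION dbo.rg_classifier() RETURNS sysname WITH SCHEMABINDING AS
        BEGIN
            RETURN CASE WHEN SUSER_SNAME() = N'sqlagent_maint'
                        THEN N'MaintGroup' ELSE N'default' END
        END""")
        cur.execute("ALTER RESOURCE GOVERNOR "
                    "WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier)")
        cur.execute("ALTER RESOURCE GOVERNOR RECONFIGURE")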
    Hope it Helps!!

  • About to buy: Asus g75vx w/ GTX 670mx. Advice needed...

    Hi all,
    I've been lurking this past week, trying to decide on a laptop, and am about to bite the bullet on an Asus G75vx (t4020h) laptop to run Premiere CS6.
    I've read through threads here that have mentioned this machine, but I would really appreciate it if someone could give me a quick thumbs up or down. The laptop will be for editing only. The formats and media I will be using that I imagine will stress the computer most: P2 video (AVC-Intra 100) and RED 4K.
    I'm buying it through Ebay:
    http://www.ebay.de/itm/ASUS-R-O-G-GAMER-G75VX-i7-3630QM-120GB-SSD-750GBHD-32GB-RAM-Nvidia-GTX670-MX-3GB-/290851609890?pt=DE_Technik_Computer_Peripherieger%C3%A4te_Notebooks&hash=item43b81b5d22#ht_8545wt_1346
    The processor is good enough, I believe (i7-3630QM), and the RAM is maxed out.
    The graphics card? It seems a better bet than the 660M or 670M models, but I don't know for sure.
    There's a 120 GB SSD and a 750 GB 7200 rpm HDD. I will try this configuration first, and if read times seem slow I'll have to put a RAID system together.
    The HDD is a hybrid drive. I'm not sure I want or need this, so perhaps someone could help me decide on this point?
    Any other thoughts are welcome. I think there are a few members here who are using a G75 with CS6. Any pitfalls or blind spots I should know about?
    thanks all,
    O

    A few reasons:
    The first is that I have to start my first editing job immediately, and have no time to source the components and build a desktop system.
    Second, I live in a small apartment, where my girlfriend and I both work. I have an office I use occasionally when things get claustrophobic, and a laptop will mean that I'm not tied to a single location.
    Finally, I live in Berlin, where - most likely, given the market here - many of the productions I will work on will be relatively small affairs. And I'm predicting I will have to meet with clients wherever they like (usually a cafe or bar, or on set).
    I might be wrong, but in the above ways choosing a laptop as my first system broadens my options, workwise. I want to build a desktop system too, eventually, but not before this venture gains momentum.
    I know that a laptop will also limit me, which is why I have to ensure that I get something that will perform well. It's quite scary to imagine that I might find myself with a machine that won't deliver, but I also have to consider the practical constraints I mentioned above.
    I think, also, that an HP EliteBook with a similar spec would be a better bet, but the cost is prohibitive right now (an initial outlay of approximately 1k more). So I would really like to know if the Asus route is going to be worth the investment, especially if I will have to configure a RAID system to replace the SSD boot+app drive and HDD media drive.
    Any comments are welcome at this point! I just need some 'fatherly' advice.
