[SOLVED] Best practice on updating git-sourced packages?

So, quick question: I'm maintaining an AUR package that grabs its sources from git. How should I update the package? Will the package be updated automatically when the git repo has new commits, or do I have to manually increment the pkgrel variable in the PKGBUILD, or is there something else I should be doing?
Cheers!
Last edited by Zygote (2014-04-03 07:04:44)

Zygote wrote: I'm maintaining an AUR package that grabs its sources from git. How should I update the package?
As a maintainer, you shouldn't "update" anything. As long as the PKGBUILD successfully builds the package, it's good!
Will the package be automatically updated when the git repo has new commits
No, it will not. There is nothing automatic about the AUR.
do I have to manually increment the pkgrel variable in the PKGBUILD or is there something else I should be doing?
Updating a package that relies on git is the responsibility of the end user.
As an end user, it's my responsibility to read the project's commit messages, mailing lists, and bug reports, decide that the software is stable, and, when I have enough time, run "makepkg" (or an AUR helper) to build a fresh package from the latest source in the git repository.
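For reference, a git-based PKGBUILD typically lets makepkg work out the version on its own through a pkgver() function, so as maintainer you normally only touch the file when the build steps, dependencies, or packaging change. A minimal sketch, using a hypothetical project name and URL rather than a real package:

# Sketch of a VCS PKGBUILD; "myproject" and example.com are placeholders.
pkgname=myproject-git
pkgver=r123.abc1234   # placeholder; regenerated by pkgver() at build time
pkgrel=1
pkgdesc="Example git-sourced package (sketch)"
arch=('i686' 'x86_64')
url="https://example.com/myproject"
license=('MIT')
makedepends=('git')
source=("$pkgname::git+https://example.com/myproject.git")
md5sums=('SKIP')   # VCS sources are not checksummed

# makepkg calls this after checkout and derives the version from the
# latest commit, so every build reflects whatever HEAD currently is.
pkgver() {
  cd "$pkgname"
  printf "r%s.%s" "$(git rev-list --count HEAD)" "$(git rev-parse --short HEAD)"
}

build() {
  cd "$pkgname"
  make
}

package() {
  cd "$pkgname"
  make DESTDIR="$pkgdir" install
}

With a layout like that, end users simply re-run makepkg (or their AUR helper) whenever they want the latest upstream commits; pkgrel only needs bumping when the PKGBUILD itself changes.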

Similar Messages

  • What is the best practice in securing deployed source files

    hi guys,
    Just yesterday I developed a simple image cropper using Ajax and Flash. After compiling the package, I noticed that the package/installer delivers the exact same source files I developed to the installed folder.
    This didn't concern me much at first, but the more I think about it, the more one question keeps coming back:
    "What is the best practice in securing deployed source files?"
    How do we protect an application's installed source files from being tampered with, especially after they have been installed? For example, a file such as spraydata.js can easily be modified with an editor.

    Hi,
    You could compute a SHA or MD5 hash of your source files on
    first run and save these hashes to EncryptedLocalStore.
    On startup, recompute and verify. (This, of course, does not address the case where the main app's SWF/SWC/HTML itself is decompiled.)

  • Best Practice for Updating children UIComponents in a Container?

    What is the best practice for updating child UIComponents in response to a Container being changed? For instance, when a Canvas is resized, I would like to update the height and width of all the child UIComponents so the content scales properly.
    Right now I am looping over the children, calling invalidateProperties(), invalidateSize(), and invalidateDisplayList() on each. I know some of the containers, such as VBox and HBox, have layout managers; is there a way to leverage something like that?
    Thanks.

    You would only do that if it makes your job easier; generally speaking, it would not.
    When trying to sync sound and animation, I think most authors find it easiest to use graphic symbols, because you can see their animation when scrubbing the main timeline. With movieclips you only see their animation when testing.
    However, if you're going to use ActionScript to control some of your symbols, those symbols should be movieclips.

  • SQL 2008 R2 Best Practices for Updating Statistics for a 1.5 TB VLDB

    We currently have a ~1.5 TB VLDB (SQL 2008 R2) that services both OLTP and DSS workloads pretty much on a 24x7x365 basis. For many years we have been updating statistics (full scan- 100% sample size) for this VLDB once a week on the weekend, which
    is currently taking up to 30 hours to complete.
    Somewhat recently we have been experiencing intermittent issues while statistics are being updated, which I doubt is just a coincidence. I'd like to understand exactly why the process of updating statistics can cause these issues (timeouts/errors). My theory
    is that the optimizer is forced to choose an inferior execution plan while the needed statistics are in "limbo" (stuck between the "old" and the "new"), but that is again just a theory. I'm somewhat surprised that the "old" statistics couldn't continue to
    get used while the new/current statistics are being generated (like the process for rebuilding indexes online), but I don't know all the facts behind this mechanism yet so that may not even apply here.
    I understand that we have the option of reducing the sample percentage/size for updating statistics, which is currently set at 100% (full scan).  Reducing the sample percentage/size for updating statistics will reduce the total processing time, but
    it's also my understanding that doing so will leave the optimizer with less than optimal statistics for choosing the best execution plans. This seems to be a classic case of not being able to have one’s cake and eat it too.
    So in a nutshell I'm looking to fully understand why the process of updating statistics can cause access issues and I'm also looking for best practices in general for updating statistics of such a VLDB. Thanks in advance.
    Bill Thacker

    I'm with you. Yikes is exactly right with regard to suspending all index optimizations for so long. I'll probably start a separate forum thread about that in the near future, but for now let's stick to the best practices for updating statistics.
    I'm a little disappointed that multiple people haven't already chimed in about this and offered up some viable solutions. Like I said previously, I can't be the first person in need of such a thing. This database has 552 tables, with far more statistics objects than that associated with those tables. The metadata has to be there for determining which statistics objects can go (not utilized much, if at all, so delete them - and also produce an actual script to delete the useless ones identified) and what the proper sample percentage/size should be for updating the remaining, utilized statistics (again, also produce a script that can be used for executing the appropriate UPDATE STATISTICS commands for each table based on cardinality).
    The above solution would be much more ideal IMO than just issuing a single update statistics command that samples the same percentage/size for every table (e.g. 10%). That's what we're doing today at 100% (full scan).
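    Something along these lines could generate those per-table commands (a sketch only - the row-count thresholds and sample percentages below are placeholders to tune, not recommendations):
    -- Sketch: build an UPDATE STATISTICS command per table, picking a
    -- sample size from the table's row count. Thresholds are illustrative.
    SELECT 'UPDATE STATISTICS '
           + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
           + CASE WHEN p.row_count <  1000000 THEN ' WITH FULLSCAN;'
                  WHEN p.row_count < 50000000 THEN ' WITH SAMPLE 25 PERCENT;'
                  ELSE                             ' WITH SAMPLE 5 PERCENT;'
             END AS update_cmd
    FROM sys.tables AS t
    JOIN sys.schemas AS s
      ON s.schema_id = t.schema_id
    JOIN (SELECT object_id, SUM(rows) AS row_count
          FROM sys.partitions
          WHERE index_id IN (0, 1)        -- heap or clustered index only
          GROUP BY object_id) AS p
      ON p.object_id = t.object_id
    ORDER BY p.row_count DESC;
    The generated commands can then be reviewed and run in batches, which would also make it possible to spread the work across several maintenance windows instead of one 30-hour pass.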
    Come on SQL Server Community. Show me some love :)
    Bill Thacker

  • EBS Supplier best practice to update vendor site code, update or create a new one

    I have a question related to the EBS Supplier vendor site code. The application lets you update the vendor site code, but what is the best practice for updating it? Would you inactivate the existing one and create a new one, or would you just update the existing value?

    Ok,
    My workaround was to put an action in my task flow to commit. After that I put two more actions (execute) and then went back to my page. This works, but I would like to know if there is a more efficient way to do this just when I am inserting.
    Regards

  • General Oracle Database Performance trouble solving best practice Steps

    We use an Oracle 11g database on Windows Server 2008 R2 as the backend for a web application.
    We are having performance trouble in that database.
    I would like to know the general best-practice steps for troubleshooting Oracle database performance.
    Is there a good general best-practice document for performance troubleshooting on the internet?

    @Girish Sharma:  I disagree with this. Many people say things like your phrase "..first identify the root cause and then move forward" but that is not the first step. Any such technique is nothing more than looking at some report, finding a number that you don't like, and attempting to "fix" it. Some people use that supposedly funny term "compulsive tuning disorder" (first used by Gaja Krishna Vaidyanatha) to describe this approach (also advocated in this topic by @Supriyo Dey). The first step must be to determine what the problem is. Until you know that, all those reports you mentioned (which, remember, require EE plus pack licences) are useless.
    @teradata0802, your best practice starts by finding the problem. Is it, for example, that the overnight batch jobs don't finish until lunchtime? A screen takes 10 seconds to refresh, and your target is one second? A report takes half an hour, but you need to run it every five minutes? Determine what business function is causing your client to lose money because it is too slow. Then investigate what it is doing, how, and why. You have to begin by focusing on the problem, not by running database-wide reports.

  • SCCM 2012 R2 Driver - Best Practices on Updating Driver Packages?

    For example, new Surface drivers were released and we are currently using the September ones. What is the best way to update the drivers? If I import them, it shows multiple drivers, old and new... Thoughts? Blog post?

    No. You must always import drivers to be able to use either one of the Apply Driver task types in a task sequence.
    However, you can also run a driver installer provided by the vendor as a package, because the driver installer is a generic exe that does whatever it's supposed to do outside the control of ConfigMgr.
    Note that although you can't use an Auto Apply Drivers task in stand-alone media, you can absolutely use an Apply Driver Package task in stand-alone media. In general, most folks do not rely on Auto Apply but instead rely on Apply Driver Package, for multiple reasons.
    Jason | http://blog.configmgrftw.com | @jasonsandys

  • Best Practice on Updating From a DB

    Hi Everyone,
    What are some best practices surrounding getting data from an Oracle database into the cache layer when a data change event (insert, update, delete) happens? I've searched far and wide, and the best answer I can find is to use Extractor/Replicator -> JMS -> Subscriber -> cache.
    Thank you for your help.

    You're right, DCN is an interesting idea, but it's again a case where the technology works for simple Hello World examples but fails to deliver in the real world.
    To me DCN looks like an unfinished Oracle project: a lot of marketing, but poor features. It's mostly good for student work or test labs, not for real-world complexity.
    Two reasons:
    1. DCN has severe limitations on the complexity of joins and queries if you plan to use the query change notification feature.
    2. It puts too much pressure on the database by creating tons of events you don't need and don't expect, because it's too generic.
    Instead of DCN, create ordinary Oracle AQ queues, using a tiny SQL object type as the event payload, then create triggers and/or PL/SQL stored procedures which fill the event with all the primary keys you need and the unique ID of the object you need to extract.
    Triggers will filter out unnecessary updates, sending events only when you wish.
    If the conditions are too complex for triggers, you may create and enqueue events either from the event source application itself or on a scheduled basis; it's entirely up to you. The technique of creating object views and using an INSTEAD OF trigger on the object view also works pretty well.
    And finally, implement a listener on the Coherence side which reads the event, makes the necessary extracts and assembles a Java object ready to be placed into the cache, based on the event ID and the set of primary keys in the event. Once the Java object is assembled, you can place it into the cache.
    Don't use Hibernate, TopLink or any other relational-to-object frameworks here; they're too slow and add excessive, unnecessary overhead to the process. Use standard Oracle database features instead; they're much faster and transaction-safe. Usage of these frameworks with a 10g or 11g database is obsolete and is caused mainly by a lack of knowledge among Java developers about the database features in this regard.
    In order to make the whole system fail-safe and scalable, you have to implement the listener in a fail-safe fashion, in the form of a work manager plus slave processes spawned on the other nodes. The work manager has to be auto fail-safe and auto scalable, so that if the node holding the work manager instance fails due to a cache cluster member departure, a reset, or something else, another work manager is automatically spawned on the first available node.
    Also, the work manager should spread and synchronize the work among the slave listener processes based on the current cache cluster members, automatically re-balancing and recovering work when cache members join or depart.
    Out of the box, Coherence has an implementation of a work manager, but it's not fail-safe and does not provide the automatic scale-up/recovery features described above, so you have to implement your own.
    All the features I've described are implemented and happily used in complex OLTP + workflow system backed up by big Oracle RAC cluster with huge workload, processing millions transactions per day.
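    For illustration only, a minimal sketch of the payload type, queue, and trigger side of this design - every name below (cache_event_t, cache_event_q, orders, order_id) is a placeholder, not something taken from the system described above:
    -- Sketch: a tiny object-type payload, an AQ queue, and a trigger that
    -- enqueues an event whenever a row of interest changes.
    CREATE TYPE cache_event_t AS OBJECT (obj_id NUMBER, obj_type VARCHAR2(30));
    /
    BEGIN
      DBMS_AQADM.CREATE_QUEUE_TABLE(queue_table        => 'cache_event_qt',
                                    queue_payload_type => 'CACHE_EVENT_T');
      DBMS_AQADM.CREATE_QUEUE(queue_name  => 'cache_event_q',
                              queue_table => 'cache_event_qt');
      DBMS_AQADM.START_QUEUE(queue_name => 'cache_event_q');
    END;
    /
    CREATE OR REPLACE TRIGGER trg_orders_cache_evt
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW
    DECLARE
      l_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
      l_props DBMS_AQ.MESSAGE_PROPERTIES_T;
      l_msgid RAW(16);
    BEGIN
      -- Only the keys travel in the event; the listener does the extract.
      DBMS_AQ.ENQUEUE(queue_name         => 'cache_event_q',
                      enqueue_options    => l_opts,
                      message_properties => l_props,
                      payload            => cache_event_t(NVL(:NEW.order_id, :OLD.order_id), 'ORDER'),
                      msgid              => l_msgid);
    END;
    /
    The Coherence-side listener then dequeues these events, loads the rows identified by the keys, assembles the cache object, and puts it into the cache.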

  • Best practice for updating ATWRT (Characteristic Value) in AUSP

    I've noticed that when we change the characteristic value of a classification, it does not update in the MM record. We have to go into MM02 for each material number that references the characteristic value and manually change it for the row in AUSP to get updated.
    Should I just create a report to loop through and update table AUSP directly? Or is there a better way to do this via a function or BAPI etc.? I'm wanting to know what best practice is recommended.

    Hi Scott
    You can use a BAPI to do that.
    Check the following thread:
    BAPI to update characteristics in Material master?
    BR
    Caetano

  • Best Practice: Usage of the ABAP Packages Concept?

    Hi SDN folks,
      I've just started on a new project - I have significant ABAP development experience (15 years+) - but one thing that I have never seen used correctly is the Package concept in ABAP - for any of the projects that I have worked on.
    I would like to define some best practices - about when we should create packages - and about how they should be structured.
    My understanding of the package concept is that it allows you to bundle all of the related objects of a piece of development work together. In previous projects - and almost every project I have ever worked on - we just have packages ZBASIS, ZMM, ZSD, ZFI and so on. But this to me is a very crude usage of packages, and really it seems that we have not moved on past the 4.6 usage of the old development class concept - and it means that packages do not really add much value.
    I read in the SAP PRESS Next Generation ABAP book (Thomas Ljung, Rich Hellman) (I only have the 1st edition) - that we should use packages for defining separation of concern for an application. So it seems there they are recommending that for each and every application we write - we define at the least 3 packages - one for Model, one for Controller and one for view based objects. It occurs to me that following this approach will lead to a tremendous number of packages over the life cycle of an implementation, which could potentially lead to confusion - and so also add little value. Is this really the best practice approach? Has anyone tried this approach across a full blown implementation?
    As we are starting a new implementation - we will be running with 7 EHP2 and I would really like to get the most out of the functionality that is provided. I wonder what others have for experience in the definition of packages.
    One possible usage occurs to me that you could define the packages as a mirror image of the application business object class hierarchy (see below). But perhaps this is overcomplicating their usage - and would lead to issues later in terms of transportation conflicts etc.:
                                          ZSD
                                            |
                    ZSOrder    ZDelivery   ZBillingDoc
    Does anyone have any good recommendations for the usage of the ABAP Package concept - from real life project experience?
    All contributions are most welcome - although please refrain from sending links on how to create packages in SE80
    Kind Regards,
    Julian

    Hi Julian,
    I have struggled with the same questions you are addressing. On a previous project we tried to model based on packages, but during the course of the project we encountered some problems that grew over time. The main problems were:
    1. It is hard to enforce rules on package assignments.
    2. With multiple developers on the project and limited time, we didn't have time to review package assignments.
    3. Developers would click away warnings that an object was already part of another package and just continue.
    4. After go-live, the maintenance partner didn't care.
    So, my experience is that it is a nice feature, but only from a high-level design point of view. In real life it gets messy and, above all, it doesn't add much value to the development. On my new assignment we are just working with packages based on functional area, and that works just fine.
    Roy

  • Best practice to update to Snow Leopard

    I just placed my family pack order on Amazon.com for Snow Leopard. But this will be the first time for me doing an OS upgrade on a Mac (all 4 Macs in my house came with Leopard on them so we've only done the "software update" variety). I am a reformed PC guy so humor me!
    What is the best practice to upgrade from 10.5.8 to 10.6? On a PC, my inclination would be to back up my data, reformat the whole drive and install Windows fresh... then all my apps... then the data. I hate that, and it takes hours.
    What is the best practice way to upgrade the Mac OS?

    The best option is Erase and Install. The next best option is Archive and Install. Use the latter if you do not want to or can't erase your startup volume.
    How to Perform an Archive and Install
    An Archive and Install will NOT erase your hard drive, but you must have sufficient free space for a second OS X installation which could be from 3-9 GBs depending upon the version of OS X and selected installation options. The free space requirement is over and above normal free space requirements which should be at least 6-10 GBs. Read all the linked references carefully before proceeding.
    1. Be sure to use Disk Utility first to repair the disk before performing the Archive and Install.
    Repairing the Hard Drive and Permissions
    Boot from your OS X Installer disc. After the installer loads select your language and click on the Continue button. When the menu bar appears select Disk Utility from the Installer menu (Utilities menu for Tiger.) After DU loads select your hard drive entry (mfgr.'s ID and drive size) from the left side list. In the DU status area you will see an entry for the S.M.A.R.T. status of the hard drive. If it does not say "Verified" then the hard drive is failing or failed. (SMART status is not reported on external Firewire or USB drives.) If the drive is "Verified" then select your OS X volume from the list on the left (sub-entry below the drive entry), click on the First Aid tab, then click on the Repair Disk button. If DU reports any errors that have been fixed, then re-run Repair Disk until no errors are reported. If no errors are reported, then quit DU and return to the installer.
    2. Do not proceed with an Archive and Install if DU reports errors it cannot fix. In that case use Disk Warrior and/or TechTool Pro to repair the hard drive. If neither can repair the drive, then you will have to erase the drive and reinstall from scratch.
    3. Boot from your OS X Installer disc. After the installer loads select your language and click on the Continue button. When you reach the screen to select a destination drive click once on the destination drive then click on the Option button. Select the Archive and Install option. You have an option to preserve users and network preferences. Only select this option if you are sure you have no corrupted files in your user accounts. Otherwise leave this option unchecked. Click on the OK button and continue with the OS X Installation.
    4. Upon completion of the Archive and Install you will have a Previous System Folder in the root directory. You should retain the PSF until you are sure you do not need to manually transfer any items from the PSF to your newly installed system.
    5. After moving any items you want to keep from the PSF you should delete it. You can back it up if you prefer, but you must delete it from the hard drive.
    6. You can now download a Combo Updater directly from Apple's download site to update your new system to the desired version as well as install any security or other updates. You can also do this using Software Update.

  • Best practice to update inline/publish folio?

    Hi there
    I think everything is in my question.
    I have an online application with an online folio, and I need to update that folio with a new version.
    What is the best practice to organize my work?
    Do I have to keep working in InDesign with the same ID but not update/republish it in Folio Producer (this option scares me... what if my draft goes online)?
    Or do I have to create another folio and, after testing it, publish it with the same folio name and description? (Not sure it will update the same file, as it is not the same.)
    What is the best practice to organize me/my work/my files?
    Thank you


  • Best Practice for Updating Infotype HRP1001 via Class / Methods

    I want to update an existing (custom) relationship between two positions.
    For example I want
    Position 1 S  = '50007200'
    Position 2 S =  '50007202'
    Relationship = 'AZCR'
    effective today through 99991231
    Is there a best practice or generally accepted way for doing this using classes/methods rather than RH_INSERT_INFTY ?
    If so, please supply an example.
    Thanks...
    ....Mike


  • Best practice for updating SL to 10.6.8

    I recently purchased a 2009 iMac with the Snow Leopard upgrade installed (OS 10.6).
    It looks as though there are two updates; should I install both, or can I just install the last/latest? I'd appreciate being directed to best-practices discussions.
    FYI, I will want to install Rosetta for older applications, CS3 & CS4, that I need for old client files. Thanks.
    Ali

    Buy one. Anything you want to keep shouldn't be on only one drive; problems may occur at any time, and are particularly likely to occur during an OS update or upgrade.
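    As for the updates themselves, the 10.6.8 Combo Update is generally able to take any 10.6.x system straight to 10.6.8 in one step, or you can simply let Software Update run until nothing is left. A sketch of the command-line route (assumes an admin account; the Software Update GUI offers the same updates):
    # Sketch: checking for and applying Apple software updates from Terminal.
    softwareupdate --list                  # show the updates available for this Mac
    sudo softwareupdate --install --all    # install everything that is listed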

  • Best practices for updating agents

    We're getting ready to do our first system-wide update of agents to fix a critical bug. Our summer vacation is just ending, and teachers and students will be coming back very soon and turning on our Windows 7 computers for the first time in many weeks, although they won't all be turned on the same day. When they are turned on they will be attempting to get various updates, in particular Windows updates, but also Flash Player and Adobe Reader. I need to update the agents as quickly as possible, but I'm concerned about the possibility of the agent update conflicting with another update, especially Windows updates. Isn't it possible that Windows Update could restart a computer while the agent update is happening (or the other way around), leaving the machine in an unstable or unusable state? What are the best practices for dealing with this? I considered the possibility of deploying the agent to a dynamic workstation group whose members all have a certain file or files that indicate that they have already received the latest Windows updates. However, I can't see how to create a dynamic group based on such criteria.
    So far I have only updated a few devices at a time using "Deploy System Updates to Selected Devices in the Management Zone". When those updates are done I cancel that deployment because that's the only option I can find that does anything. If you can offer general advice for a better strategy of updating agents I'd appreciate that. Specifically, how would you push an agent update to several hundred computers that will be turned on sometime over the next two weeks?
    Thanks very much.

    Originally Posted by jcw_av
    To be honest, you have to work around your other deploys, etc. The ZCM agent isn't "aware" of other deploys going on. For example, ZPM doesn't care that you're doing Bundles at the same time (you'll get errors in the logs about the fact that only one MSI can run at a time, for example). ZPM usually recovers and picks up where it left off.
    Bundles on the other hand, with System Update, are not so forgiving. Especially if you have the agents prior to 11.2.4 MU1 (cache corruption errors).
    We usually:
    a) Halt all software rollouts/patching as best we can
    b) Our software deploys (bundles) are on event: user login. Typically the system update is on device refresh or a scheduled time, and is device-associated.
    If possible, I'd suggest that you use WOL, system update, and voila.
    Or, if no WOL is available, then tell your users to leave their PCs turned on (they don't have to be logged in) on X night, and set up your system updates for that night with auto-reboot enabled. That worked well.
    But otherwise the 3 components of ZCM (Bundles, ZPM, System Update) don't know/care about each other, AFAIK.
    --Kevin
